Sunday, February 4, 2018

Leetcode solution 621 Task Scheduler

Problem Statement 

Given a char array representing tasks a CPU needs to do. It contains capital letters A to Z, where different letters represent different tasks. Tasks could be done in any order. Each task takes one interval to finish. For each interval, the CPU could either finish one task or just be idle.
However, there is a non-negative cooling interval n, which means between two runs of the same task there must be at least n intervals during which the CPU is doing different tasks or just being idle.
You need to return the least number of intervals the CPU will take to finish all the given tasks.
Example 1:
Input: tasks = ["A","A","A","B","B","B"], n = 2
Output: 8
Explanation: A -> B -> idle -> A -> B -> idle -> A -> B.
Note:
  1. The number of tasks is in the range [1, 10000].
  2. The integer n is in the range [0, 100].
Problem link

Video Tutorial

You can find the detailed video tutorial here


Thought Process

  1. The array is not sorted, could it be sorted?
  2. Tasks form a finite set, 26 letters at maximum. 
  3. It asks for an extreme value (the minimum in this case), can Dynamic Programming (DP) be applied?
  4. DP seems hard to apply here: it is very hard to come up with a recurrence. There is a solution based on a closed-form math formula (see the sketch after this list), but it is very hard to come up with in a real interview session.
  5. Can we do any binary search here? Nope.
  6. Hmm... it seems we can be greedy here, scheduling in a round-robin way: schedule the largest job first, then the 2nd largest, then the 3rd largest, until the cooling interval is met. Note that we should always schedule the largest job as soon as possible. For example, given tasks [A, A, A, A, A, B, C, D, E] with interval n = 2, strict round robin yields A, B, C, D, E, A, idle, idle, A, idle, idle, A, idle, idle, A -> 15 cycles, whereas scheduling A as early as its cooling interval allows yields A, B, C, A, D, E, A, idle, idle, A, idle, idle, A -> 13 cycles.
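
For reference, here is a sketch of that closed-form formula (the well-known one for this problem; the code itself is my illustration, not necessarily from the video): laying out the most frequent task first creates (maxFreq - 1) chunks of size (n + 1), plus one final slot for every task tied at maxFreq; if the remaining tasks overflow the idle slots, the answer degenerates to tasks.length.

public class TaskSchedulerFormula {
  public int leastInterval(char[] tasks, int n) {
    int[] counts = new int[26];
    for (char t : tasks) {
      counts[t - 'A']++;
    }
    int maxFreq = 0;
    for (int c : counts) {
      maxFreq = Math.max(maxFreq, c);
    }
    int tiedAtMax = 0;
    for (int c : counts) {
      if (c == maxFreq) {
        tiedAtMax++; // tasks that must occupy the final chunk
      }
    }
    // (maxFreq - 1) full chunks of (n + 1) slots, plus the final chunk of
    // all tasks tied at maxFreq; never less than the task count itself.
    return Math.max(tasks.length, (maxFreq - 1) * (n + 1) + tiedAtMax);
  }
}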

Solutions

Sort after every iteration 

Given the greedy insight, we want to always schedule the largest remaining job first, so we should sort the job counts first (job names do not matter). After filling one cooling window, sort the counts again and repeat, scheduling the largest job first, then the 2nd largest, and so on.
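
A minimal sketch of this approach (my illustration of the idea, not necessarily the exact code from the video):

import java.util.Arrays;

public class TaskSchedulerSort {
  public int leastInterval(char[] tasks, int n) {
    int[] counts = new int[26];
    for (char t : tasks) {
      counts[t - 'A']++;
    }
    int remaining = tasks.length;
    int intervals = 0;
    while (remaining > 0) {
      Arrays.sort(counts); // ascending, so counts[25] is the largest count
      // Fill one cooling window of n + 1 slots with the most frequent
      // remaining tasks; stop early once every task is done.
      for (int slot = 0; slot <= n && remaining > 0; slot++) {
        intervals++;
        if (slot < 26 && counts[25 - slot] > 0) {
          counts[25 - slot]--;
          remaining--;
        }
        // otherwise this slot is a forced idle cycle
      }
    }
    return intervals;
  }
}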

Use priority queue (max heap) 

Instead of sorting after every iteration, we can use a max heap to always pop the largest remaining count. Rather than an O(N lg N) sort on every iteration, each heap operation is only O(lg N) to heapify, even though N is fixed at 26 here.
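
A sketch of the heap-based version (again an illustration): each cooling window pops up to n + 1 of the largest remaining counts, runs one instance of each, and pushes back whatever is left.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TaskSchedulerHeap {
  public int leastInterval(char[] tasks, int n) {
    int[] counts = new int[26];
    for (char t : tasks) {
      counts[t - 'A']++;
    }
    PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
    for (int c : counts) {
      if (c > 0) {
        maxHeap.offer(c);
      }
    }
    int intervals = 0;
    while (!maxHeap.isEmpty()) {
      List<Integer> leftover = new ArrayList<>();
      int executed = 0;
      // Run up to n + 1 distinct tasks in this cooling window.
      for (int slot = 0; slot <= n && !maxHeap.isEmpty(); slot++) {
        int count = maxHeap.poll();
        if (count > 1) {
          leftover.add(count - 1);
        }
        executed++;
      }
      maxHeap.addAll(leftover);
      // If tasks remain, the window's unfilled slots are forced idles;
      // the final window contributes only the tasks actually executed.
      intervals += maxHeap.isEmpty() ? executed : n + 1;
    }
    return intervals;
  }
}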


Monday, January 15, 2018

Leetcode solution 162 Find Peak Element

Problem Statement 

A peak element is an element that is greater than its neighbors. Given an input array where num[i] ≠ num[i+1], find a peak element and return its index. The array may contain multiple peaks; in that case, returning the index of any one of the peaks is fine. You may imagine that num[-1] = num[n] = -∞. For example, in the array [1, 2, 3, 1], 3 is a peak element and your function should return its index, 2.

Problem link

You can find the detailed video tutorial here:

A few edge cases to think about

  • Null array, one element array 
  • Strictly increasing array 
  • Strictly decreasing array 
  • Array with all identical elements (won't happen in this case, since num[i] ≠ num[i+1])

Code Sample
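
Below is a minimal sketch of the standard binary search approach (a reconstruction for illustration): if the middle element sits on an ascending slope, a peak must exist to its right; otherwise a peak exists at mid or to its left.

public class FindPeakElement {
  public int findPeakElement(int[] nums) {
    if (nums == null || nums.length == 0) {
      return -1; // edge case: null or empty array
    }
    int left = 0;
    int right = nums.length - 1;
    while (left < right) {
      int mid = left + (right - left) / 2;
      if (nums[mid] < nums[mid + 1]) {
        left = mid + 1; // ascending slope: a peak lies strictly to the right
      } else {
        right = mid; // descending slope: a peak lies at mid or to the left
      }
    }
    return left; // left == right is a peak index
  }
}

This also covers the edge cases above: a one-element array returns index 0, a strictly increasing array returns the last index, and a strictly decreasing array returns index 0.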

A few follow-ups
  • Treat the first and last elements as negative infinity (Solution: just skip the first and last elements then)
  • There could be duplicate elements, so neighbors can be equal (Solution: same idea, just change the comparison logic accordingly)

References

  • https://www.youtube.com/watch?v=NFvAD5na5oU
  • https://www.tangjikai.com/algorithms/leetcode-162-find-peak-element
  • https://leetcode.com/problems/find-peak-element/solution/

Monday, October 27, 2014

[Interview Data Structures Summary 1] Touch One Hair Without Moving the Whole Body: Consistent Hashing


Baozi IT Interview Training helps you land your ideal offer! 

Got a question? Ask Baozi!

Consistent hashing is a data structure that comes up frequently in interviews and is also very useful in real engineering work, for example in cache systems, partitioning systems, and distributed hash tables. Below I have reposted an excellent article by Tom White that explains very clearly how the ring in consistent hashing works. Wikipedia also describes a similar algorithm, HRW (Highest Random Weight), which is fairly simple as well. To sum up, the goal of consistent hashing is to affect as few other nodes as possible when a node is added or removed: touching one hair, as the saying goes, without moving the whole body.

I've bumped into consistent hashing a couple of times lately. The paper that introduced the idea (Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web by David Karger et al) appeared ten years ago, although recently it seems the idea has quietly been finding its way into more and more services, from Amazon's Dynamo to memcached (courtesy of Last.fm). So what is consistent hashing and why should you care?
The need for consistent hashing arose from limitations experienced while running collections of caching machines - web caches, for example. If you have a collection of n cache machines then a common way of load balancing across them is to put object o in cache machine number hash(o) mod n. This works well until you add or remove cache machines (for whatever reason), for then n changes and every object is hashed to a new location. This can be catastrophic since the originating content servers are swamped with requests from the cache machines. It's as if the cache suddenly disappeared. Which it has, in a sense. (This is why you should care - consistent hashing is needed to avoid swamping your servers!)
It would be nice if, when a cache machine was added, it took its fair share of objects from all the other cache machines. Equally, when a cache machine was removed, it would be nice if its objects were shared between the remaining machines. This is exactly what consistent hashing does - consistently maps objects to the same cache machine, as far as is possible, at least.
The basic idea behind the consistent hashing algorithm is to hash both objects and caches using the same hash function. The reason to do this is to map the cache to an interval, which will contain a number of object hashes. If the cache is removed then its interval is taken over by a cache with an adjacent interval. All the other caches remain unchanged.

Demonstration

Let's look at this in more detail. The hash function actually maps objects and caches to a number range. This should be familiar to every Java programmer - the hashCode method on Object returns an int, which lies in the range -2^31 to 2^31-1. Imagine mapping this range into a circle so the values wrap around. Here's a picture of the circle with a number of objects (1, 2, 3, 4) and caches (A, B, C) marked at the points that they hash to (based on a diagram from Web Caching with Consistent Hashing by David Karger et al):
[Figure: consistent_hashing_1.png - objects 1-4 and caches A, B, C placed around the hash circle]
To find which cache an object goes in, we move clockwise round the circle until we find a cache point. So in the diagram above, we see object 1 and 4 belong in cache A, object 2 belongs in cache B and object 3 belongs in cache C. Consider what happens if cache C is removed: object 3 now belongs in cache A, and all the other object mappings are unchanged. If then another cache D is added in the position marked it will take objects 3 and 4, leaving only object 1 belonging to A.
[Figure: consistent_hashing_2.png - the circle after cache C is removed and cache D is added]
This works well, except the size of the intervals assigned to each cache is pretty hit and miss. Since it is essentially random it is possible to have a very non-uniform distribution of objects between caches. The solution to this problem is to introduce the idea of "virtual nodes", which are replicas of cache points in the circle. So whenever we add a cache we create a number of points in the circle for it.
You can see the effect of this in the following plot which I produced by simulating storing 10,000 objects in 10 caches using the code described below. On the x-axis is the number of replicas of cache points (with a logarithmic scale). When it is small, we see that the distribution of objects across caches is unbalanced, since the standard deviation as a percentage of the mean number of objects per cache (on the y-axis, also logarithmic) is high. As the number of replicas increases the distribution of objects becomes more balanced. This experiment shows that a figure of one or two hundred replicas achieves an acceptable balance (a standard deviation that is roughly between 5% and 10% of the mean).
[Figure: ch-graph.png - standard deviation of objects per cache vs. number of replicas, both axes logarithmic]
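
For the curious, such a simulation could be reproduced roughly as follows, using the ConsistentHash class from the Implementation section below together with the MD5HashFunction sketched after it (the cache names and object keys here are arbitrary choices of mine):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaBalanceSimulation {
  public static void main(String[] args) {
    for (int replicas : new int[] {1, 10, 100, 1000}) {
      List<String> caches = new ArrayList<>();
      for (int c = 0; c < 10; c++) {
        caches.add("cache" + c);
      }
      ConsistentHash<String> ring =
          new ConsistentHash<>(new MD5HashFunction(), replicas, caches);
      // Store 10,000 objects and count how many land on each cache.
      Map<String, Integer> load = new HashMap<>();
      for (int o = 0; o < 10000; o++) {
        load.merge(ring.get("object" + o), 1, Integer::sum);
      }
      // Standard deviation of objects per cache, as a percentage of the mean.
      double mean = 10000.0 / caches.size();
      double variance = 0;
      for (String cache : caches) {
        double diff = load.getOrDefault(cache, 0) - mean;
        variance += diff * diff;
      }
      double stddev = Math.sqrt(variance / caches.size());
      System.out.printf("replicas=%4d stddev=%.1f (%.1f%% of mean)%n",
          replicas, stddev, 100 * stddev / mean);
    }
  }
}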


Implementation

For completeness here is a simple implementation in Java. In order for consistent hashing to be effective it is important to have a hash function that mixes well. Most implementations of Object's hashCode do not mix well - for example, they typically produce a restricted number of small integer values - so we have a HashFunction interface to allow a custom hash function to be used. MD5 hashes are recommended here.
import java.util.Collection;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHash<T> {

  private final HashFunction hashFunction;
  private final int numberOfReplicas;
  // The circle: hash values mapped to the node stored at that point.
  private final SortedMap<Integer, T> circle = new TreeMap<Integer, T>();

  public ConsistentHash(HashFunction hashFunction, int numberOfReplicas,
      Collection<T> nodes) {
    this.hashFunction = hashFunction;
    this.numberOfReplicas = numberOfReplicas;

    for (T node : nodes) {
      add(node);
    }
  }

  // Place numberOfReplicas virtual nodes on the circle for this node.
  public void add(T node) {
    for (int i = 0; i < numberOfReplicas; i++) {
      circle.put(hashFunction.hash(node.toString() + i), node);
    }
  }

  // Remove all of this node's virtual nodes from the circle.
  public void remove(T node) {
    for (int i = 0; i < numberOfReplicas; i++) {
      circle.remove(hashFunction.hash(node.toString() + i));
    }
  }

  // Find the node for a key: walk clockwise to the next point on the
  // circle, wrapping around to the first key if necessary.
  public T get(Object key) {
    if (circle.isEmpty()) {
      return null;
    }
    int hash = hashFunction.hash(key);
    if (!circle.containsKey(hash)) {
      SortedMap<Integer, T> tailMap = circle.tailMap(hash);
      hash = tailMap.isEmpty() ? circle.firstKey() : tailMap.firstKey();
    }
    return circle.get(hash);
  }
}
The circle is represented as a sorted map of integers, which represent the hash values, to caches (of type T here).
When a ConsistentHash object is created each node is added to the circle map a number of times (controlled by numberOfReplicas). The location of each replica is chosen by hashing the node's name along with a numerical suffix, and the node is stored at each of these points in the map.
To find a node for an object (the get method), the hash value of the object is used to look in the map. Most of the time there will not be a node stored at this hash value (since the hash value space is typically much larger than the number of nodes, even with replicas), so the next node is found by looking for the first key in the tail map. If the tail map is empty then we wrap around the circle by getting the first key in the circle.
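
The HashFunction interface itself isn't shown in the article, so here is a minimal sketch of what it might look like, together with a hypothetical MD5-backed implementation as the article recommends (the interface shape and the MD5HashFunction name are my assumptions, not original code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Assumed shape of the interface used by ConsistentHash above.
interface HashFunction {
  int hash(Object key);
}

// A hypothetical MD5-backed implementation; MD5 mixes well.
class MD5HashFunction implements HashFunction {
  @Override
  public int hash(Object key) {
    try {
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      byte[] digest = md5.digest(key.toString().getBytes(StandardCharsets.UTF_8));
      // Fold the first four bytes of the digest into a single int.
      return ((digest[0] & 0xFF) << 24)
          | ((digest[1] & 0xFF) << 16)
          | ((digest[2] & 0xFF) << 8)
          | (digest[3] & 0xFF);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException("MD5 not available", e);
    }
  }
}

With these pieces, constructing a ring looks like new ConsistentHash<String>(new MD5HashFunction(), 100, nodes), where 100 replicas gives roughly the acceptable balance discussed above.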

Usage

So how can you use consistent hashing? You are most likely to meet it in a library, rather than having to code it yourself. For example, as mentioned above, memcached, a distributed memory object caching system, now has clients that support consistent hashing. Last.fm's ketama by Richard Jones was the first, and there is now a Java implementation by Dustin Sallings (which inspired my simplified demonstration implementation above). It is interesting to note that it is only the client that needs to implement the consistent hashing algorithm - the memcached server is unchanged. Other systems that employ consistent hashing include Chord, which is a distributed hash table implementation, and Amazon's Dynamo, which is a key-value store (not available outside Amazon).

