Showing posts with label java5Concurrency. Show all posts

Wednesday, 29 June 2011

LinkedBlockingQueue vs SynchronousQueue


For the producer-consumer problem, code written against a SynchronousQueue is almost identical to the LinkedBlockingQueue version, but the application gains an added benefit: a SynchronousQueue will allow an insert into the queue only if there is a thread waiting to consume it.

As discussed here, a SynchronousQueue has a capacity of zero. So it implements a rendezvous approach (the producer waits until a consumer is ready, the consumer waits until a producer is ready) behind the Queue interface.
The implementation of SynchronousQueue also appears to be heavily optimized, so if you don't need anything more than a rendezvous point (as in the case of Executors.newCachedThreadPool(), where consumers are created "on demand" so that queue items don't accumulate), you can get a performance gain by using SynchronousQueue.
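A minimal sketch of this rendezvous behaviour (the class and value names here are mine):

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<String> queue = new SynchronousQueue<String>();

        // put() blocks until another thread calls take(), and vice versa.
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Consumed: " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        queue.put("job-1");   // blocks until the consumer's take() is ready
        consumer.join();

        // With no waiting consumer, a non-blocking offer() fails immediately,
        // because a SynchronousQueue has zero capacity.
        System.out.println("offer with no consumer: " + queue.offer("job-2")); // false
    }
}
```

The final offer() shows the difference from a LinkedBlockingQueue: nothing is ever stored in the queue itself, so an insert succeeds only when a consumer is already waiting.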

How to make your own Thread Pool

Following are the steps to create your own thread pool :
  • create your own ThreadPoolExecutor that uses a specific BlockingQueue and Comparator.
  • override the beforeExecute and afterExecute hooks.
  • create your own ThreadFactory whose thread group overrides uncaughtException, so that threads do not die silently from uncaught exceptions.
Creating the ThreadPoolExecutor

class MyThreadPoolExecutor extends ThreadPoolExecutor {

    // MyComparator and MyThreadFactory are defined below/elsewhere.
    protected MyThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
                                   long keepAliveTime, TimeUnit unit, int maxCapacity) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit,
              new PriorityBlockingQueue<Runnable>(maxCapacity, new MyComparator()),
              new MyThreadFactory());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        /* do your stuff */
        super.beforeExecute(t, r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        /* do your stuff */
        super.afterExecute(r, t);
    }
}

Creating our ThreadFactory:

class MyThreadFactory implements ThreadFactory {
    private final ThreadGroup tg = new MyThreadGroup();

    public Thread newThread(Runnable r) {
        return new Thread(tg, r);
    }
}

class MyThreadGroup extends ThreadGroup {
    MyThreadGroup() {
        super("MyThreadGroup");
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        /* log t and e with your logger, then handle the failure */
    }
}

Monday, 27 June 2011

A simple LRU cache in 5 lines

Applications usually need to cache information in memory. The classes most often used for this in Java are HashMap and Hashtable. If you need to do any sophisticated caching, you can use JBoss Cache, OSCache or EHCache. Even if you use an external caching system, you may still want to cache some information locally within an object just to have fast access. The problem with this approach is that, if you are not careful and do not control the size of this in-memory cache, it may grow too big and affect the performance of your application.
A very simple solution to this problem is to set a maximum size for your in-memory cache and, preferably, make it LRU (Least Recently Used). This way you will have predictable memory utilization and only the items used recently will be kept in the cache.
Starting with JDK 1.4, a new (and rather rarely used) collection class called LinkedHashMap was added. There are a couple of benefits of using a LinkedHashMap:
  • It is possible to preserve the order of items in the map, so that the order of iteration through the items is the same as the order of insertion. A special constructor is provided for this purpose. This is very useful when you already have a sorted collection of data, want to do some processing on it, and return it as a Map. Using a TreeMap (the only other map that allows iteration in a given order) is too expensive for this scenario.
  • It exposes a method removeEldestEntry(Map.Entry) that may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map. This is what we are going to use to create a LRU cache.

Check out the following snippet for an example of simple LRU cache.
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLRU {

    private static final int MAX_ENTRIES = 50;

    // accessOrder = true makes this an access-ordered (LRU) map.
    private Map<String, String> mCache =
            new LinkedHashMap<String, String>(MAX_ENTRIES, .75F, true) {
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

    public SimpleLRU() {
        for (int i = 0; i < 100; i++) {
            String numberStr = String.valueOf(i);
            mCache.put(numberStr, numberStr);

            System.out.print("\rSize = " + mCache.size()
                    + "\tCurrent value = " + i
                    + "\tLast value in cache = " + mCache.get(numberStr));
            try {
                Thread.sleep(10);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
        System.out.println();
    }

    public static void main(String[] args) {
        new SimpleLRU();
    }
}

Monitors in java

Monitors are another mechanism of concurrent programming. A monitor is a higher-level mechanism than a semaphore, and also a more powerful one. A monitor is an instance of a class that can be used safely by several threads: all the methods of a monitor are executed with mutual exclusion, so at most one thread can execute a method of the monitor at a time. This mutual-exclusion policy makes it easier to reason about the monitor and to implement its methods.

Monitors have another feature: the possibility of making a thread wait for a condition. During the wait, the thread temporarily gives up its exclusive access and must reacquire it after the condition has been met. You can also signal one or more threads that a condition has been met.
There are several advantages to using monitors instead of lower-level mechanisms :
  • All the synchronization code is centralized in one location, and the users of this code don't need to know how it's implemented.
  • The code doesn't depend on the number of processes; it works for as many processes as you want.
  • You don't need to release anything like a mutex, so you cannot forget to do it.
To describe a monitor, we simply use a monitor keyword (pseudocode) and declare the methods as ordinary methods :
monitor SimpleMonitor {
public method void testA(){
//Some code
}

public method int testB(){
return 1;
}
}


To describe a condition variable, we use the cond keyword. A condition variable is a kind of queue of processes waiting on the same condition. Several operations are available on a condition; the most important are signalling a waiting process so that it wakes up, and waiting on the condition. There are some similarities between the signal/wait operations and the P and V operations of semaphores, but they differ: the signal operation does nothing if the queue is empty, and the wait operation always puts the thread into the waiting queue. The queue of processes is served in first-come, first-served order. When a thread wakes up after waiting on a condition, it must reacquire the lock before continuing.
Before going further, we need more information about the signal operation. When writing monitors, you normally have the choice between several philosophies for signalling :
  1. Signal & Continue (SC) : the signalling process keeps mutual exclusion; the signalled thread is woken up but must reacquire mutual exclusion before proceeding.
  2. Signal & Wait (SW) : the signaller is blocked and must wait for mutual exclusion to continue, while the signalled thread wakes up directly and can continue its operations.
  3. Signal & Urgent Wait (SU) : like SW, but the signaller is guaranteed to run immediately after the signalled thread.
  4. Signal & Exit (SX) : the signaller exits from the method directly after the signal and the signalled thread can start at once. This philosophy is rarely used.
The available policies depend on the programming language; in Java there is only one policy available, SC.
In Java there is no keyword to directly create a monitor. To implement a monitor, you create a class and use the Lock and Condition types. Lock is the interface and ReentrantLock is its most commonly used implementation; that is the one we'll learn to use in this post. ReentrantLock has two constructors: a default constructor, and a constructor taking a boolean argument indicating whether the lock is fair. A fair lock hands itself to threads in the order they requested it. Fairness is a little heavier than the default locking strategy, so use it only if you need it. You acquire the lock with the lock method and release it with unlock.
Explicit locks have the same memory semantics as synchronized blocks, so the visibility of changes is guaranteed when you use lock()/unlock() blocks.
So to implement the monitor example we've seen before, we just need to create a class and use the lock for mutual exclusion :

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SimpleMonitor {
    private final Lock lock = new ReentrantLock();

    public void testA() {
        lock.lock();
        try {
            // Some code
        } finally {
            lock.unlock();
        }
    }

    public int testB() {
        lock.lock();
        try {
            return 1;
        } finally {
            lock.unlock();
        }
    }
}

Readers who have already read the other parts of this series will say that it would be easier to use the synchronized keyword on the two methods. True, but with synchronized we would not have condition variables. If you don't need condition variables but only locking, it is indeed easier to use synchronized blocks instead of Locks.

You can create conditions using the newCondition method on the lock. A condition is a variable of type Condition. You can make the current thread wait on the condition using the await method (and its variants with timeout), and you can signal threads using the signal and signalAll methods. The signalAll method wakes up all the threads waiting on the condition variable.
Let's try a simple, common example : a bounded buffer. It's a cyclic buffer with a fixed capacity, and a front and a rear.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final String[] buffer;
    private final int capacity;

    private int front;
    private int rear;
    private int count;

    private final Lock lock = new ReentrantLock();

    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
        buffer = new String[capacity];
    }

    public void deposit(String data) throws InterruptedException {
        lock.lock();
        try {
            while (count == capacity) {
                notFull.await();
            }

            buffer[rear] = data;
            rear = (rear + 1) % capacity;
            count++;

            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public String fetch() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }

            String result = buffer[front];
            front = (front + 1) % capacity;
            count--;

            notFull.signal();

            return result;
        } finally {
            lock.unlock();
        }
    }
}


Some explanations :
  1. The two methods are protected with the lock to ensure mutual exclusion.
  2. We use two condition variables: one to wait for the buffer to be not empty and another to wait for the buffer to be not full.
  3. You can see that I have wrapped each await operation in a while loop. This avoids the stolen-signal problem that can occur with Signal & Continue: by the time an awakened thread reacquires the lock, another thread may already have consumed the state change, so the condition must be rechecked.
This BoundedBuffer can then be used by several threads with no problems.
As you can see, you can use monitors to solve a lot of concurrent programming problems, and the mechanism is both powerful and efficient.
I hope you find this post useful.


Wednesday, 22 June 2011

Concurrent hashmap in java

ConcurrentHashMap extends the abstract class AbstractMap and implements the ConcurrentMap interface. It obeys the same functional specification as Hashtable: a hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. The class provides versions of the methods corresponding to each method of Hashtable and, like Hashtable, does not permit null for either key or value. All the operations of this class are thread-safe. Using ConcurrentHashMap increases performance because it permits multiple threads to modify the map concurrently without having to block each other, though the benefit disappears (and a plain HashMap may be faster) when only a single thread accesses the map at a time.
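Beyond thread-safe versions of the Hashtable methods, the ConcurrentMap interface adds atomic compound operations such as putIfAbsent. A small sketch (the class and key names are mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<String, Integer>();

        // putIfAbsent is a single atomic operation, so two threads racing on
        // the same key cannot both install a value; no external locking needed.
        System.out.println(hits.putIfAbsent("page", 1));  // null: key was absent
        System.out.println(hits.putIfAbsent("page", 99)); // 1: existing value kept
        System.out.println(hits.get("page"));             // 1
    }
}
```

Doing the same with a plain HashMap would require a check-then-put sequence guarded by a lock.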

Syntax

public class ConcurrentHashMap<K,V> extends AbstractMap<K,V> implements ConcurrentMap<K,V>

Constructors of ConcurrentHashMap

This class provides constructors through which we can create a map according to our requirements :
ConcurrentHashMap() : creates a new, empty map with the default initial capacity (16), default load factor (0.75) and concurrencyLevel (16).
ConcurrentHashMap(int initialCapacity) : creates a new, empty map with the specified initial capacity, the default load factor (0.75) and concurrencyLevel (16).
ConcurrentHashMap(int initialCapacity, float loadFactor) : creates a new, empty map with the specified initial capacity and load factor, and the default concurrencyLevel (16).
ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) : creates a new, empty map with the specified initial capacity, load factor and concurrencyLevel.
ConcurrentHashMap(Map<? extends K,? extends V> m) : creates a new map with the same mappings as the given map.

Methods of ConcurrentHashMap

This class provides many methods; some of the commonly used ones are :
  1. put() : public V put(K key, V value)
  2. clear() : public void clear()
  3. elements() : public Enumeration<V> elements()
  4. remove(Object key, Object value) : public boolean remove(Object key, Object value)
  5. replace(K key, V value) : public V replace(K key, V value)
  6. values() : public Collection<V> values()
  7. size() : public int size()
Example :

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class A implements Runnable {
    String name;
    ConcurrentMap cm;

    public A(ConcurrentMap cm, String name) {
        this.name = name;
        this.cm = cm;
    }

    public void run() {
        try {
            cm.put(1, "A");
            cm.put(2, "B");
            cm.put(3, "C");
            cm.put(4, "D");
            cm.put(5, "E");
            System.out.println(name + " maps the element : " + cm);
            System.out.println(name + " represents the set of keys: " + cm.keySet());
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

class B implements Runnable {
    String name;
    ConcurrentMap cm;

    public B(ConcurrentMap cm, String name) {
        this.name = name;
        this.cm = cm;
    }

    public void run() {
        try {
            boolean j = cm.remove(3, "C");
            System.out.println(name + " removes the element : " + j);
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

class C implements Runnable {
    String name;
    ConcurrentMap cm;

    public C(ConcurrentMap cm, String name) {
        this.name = name;
        this.cm = cm;
    }

    public void run() {
        try {
            Set s = cm.keySet();
            System.out.println(name + " represents the set of keys : " + s);
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap cm = new ConcurrentHashMap();
        Runnable a = new A(cm, "A");
        Runnable b = new B(cm, "B");
        Runnable c = new C(cm, "C");
        new Thread(a).start();
        try {
            Thread.sleep(300);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        new Thread(b).start();
        try {
            Thread.sleep(300);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        new Thread(c).start();
        try {
            Thread.sleep(300);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Output
A maps the element : {5=E, 4=D, 3=C, 2=B, 1=A}

A represents the set of keys: [5, 4, 3, 2, 1]

B removes the element : true

C represents the set of keys : [5, 4, 2, 1]



ConcurrentHashSet in Java from ConcurrentHashMap

While you do have a ConcurrentHashMap class in Java, there is no ConcurrentHashSet.
Solution
You can easily get a ConcurrentHashSet with the following code -

Collections.newSetFromMap(new ConcurrentHashMap<Object,Boolean>())

Notes
  • A Set lends itself to implementation via a Map, if you think about it, so you could actually just use a Map directly. But that may not fit well with the context of your use.
  • The HashSet class internally uses a HashMap.
  • The concurrent Set obtained via this method inherits pretty much all the concurrency properties of the underlying map.
  • See the API docs for Collections.newSetFromMap (added in Java 6).
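A small, self-contained sketch of the idiom (the class name is mine):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentSetDemo {
    public static void main(String[] args) {
        // Each element is stored as a key of the backing map,
        // with Boolean.TRUE as its value.
        Set<String> ids = Collections.newSetFromMap(
                new ConcurrentHashMap<String, Boolean>());

        System.out.println(ids.add("a"));      // true: newly added
        System.out.println(ids.add("a"));      // false: already present
        System.out.println(ids.contains("a")); // true
    }
}
```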

Blocking queues in java

The key facilities that BlockingQueue provides to producer-consumer systems are, as its name implies, enqueuing and dequeuing methods that do not return until they have executed successfully. So, for example, a print server does not need to constantly poll the queue to discover whether any print jobs are waiting; it need only call the poll method, supplying a timeout, and the system will suspend it until either a queue element becomes available or the timeout expires. BlockingQueue defines seven new methods, in three groups:


Group1 : Adding an Element

boolean offer(E e, long timeout, TimeUnit unit)
// insert e, waiting up to the timeout
void put(E e) // add e, waiting as long as necessary

The nonblocking overload of offer defined in Queue will return false if it cannot immediately insert the element. This new overload waits for a time specified using java.util.concurrent.TimeUnit, an Enum which allows timeouts to be defined in units such as milliseconds or seconds.
Taking these methods together with those inherited from Queue, there are four ways in which the methods for adding elements to a BlockingQueue can behave: offer returns false if it does not succeed immediately, blocking offer returns false if it does not succeed within its timeout, add throws an exception if it does not succeed immediately, and put blocks until it succeeds.
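These four behaviours can be demonstrated with a queue of capacity 1 (a sketch; the timeout value is arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class AddBehaviourDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<String>(1);
        q.put("first"); // succeeds immediately: the queue is empty

        // Non-blocking offer: fails at once on a full queue.
        System.out.println(q.offer("second"));                            // false

        // Timed offer: waits up to the timeout, then gives up.
        System.out.println(q.offer("second", 50, TimeUnit.MILLISECONDS)); // false

        // add throws IllegalStateException instead of returning false.
        try {
            q.add("second");
        } catch (IllegalStateException e) {
            System.out.println("add threw " + e);
        }

        // q.put("second") would block here until another thread took an element.
    }
}
```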

Group2  : Removing an Element
E poll(long timeout, TimeUnit unit)
// retrieve and remove the head, waiting up to the timeout
E take() // retrieve and remove the head of this queue, waiting
// as long as necessary

Again taking these methods together with those inherited from Queue, there are four ways in which the methods for removing elements from a BlockingQueue can behave: poll returns null if it does not succeed immediately, blocking poll returns null if it does not succeed within its timeout, remove throws an exception if it does not succeed immediately, and take blocks until it succeeds.
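The mirror-image behaviours on the removal side, again as a sketch with an arbitrary timeout:

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class RemoveBehaviourDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<String>(1);

        // Non-blocking poll: returns null at once on an empty queue.
        System.out.println(q.poll());                          // null

        // Timed poll: waits up to the timeout, then returns null.
        System.out.println(q.poll(50, TimeUnit.MILLISECONDS)); // null

        // remove throws NoSuchElementException instead of returning null.
        try {
            q.remove();
        } catch (NoSuchElementException e) {
            System.out.println("remove threw " + e);
        }

        // q.take() would block here until another thread put an element.
    }
}
```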

Group 3 : Retrieving or Querying the Contents of the Queue
int drainTo(Collection<? super E> c)
// clear the queue into c
int drainTo(Collection<? super E> c, int maxElements)
// clear at most the specified number of elements into c
int remainingCapacity()
// return the number of elements that would be accepted
// without blocking, or Integer.MAX_VALUE if unbounded


The drainTo methods perform atomically and efficiently, so the second overload is useful in situations in which you know that you have processing capability available immediately for a certain number of elements, and the first is useful, for example, when all producer threads have stopped working. Their return value is the number of elements transferred. remainingCapacity reports the spare capacity of the queue, although as with any such value in multi-threaded contexts, the result of a call should not be used as part of a test-then-act sequence; between the test (the call of remainingCapacity) and the action (adding an element to the queue) of one thread, another thread might have intervened to add or remove elements.
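A small sketch of drainTo and remainingCapacity in action:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DrainDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<Integer>(5);
        for (int i = 1; i <= 5; i++) {
            q.put(i);
        }
        System.out.println(q.remainingCapacity()); // 0: the queue is full

        // Atomically move at most 3 elements into the batch list.
        List<Integer> batch = new ArrayList<Integer>();
        int moved = q.drainTo(batch, 3);
        System.out.println(moved);                 // 3
        System.out.println(batch);                 // [1, 2, 3]
        System.out.println(q.remainingCapacity()); // 3
    }
}
```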
BlockingQueue guarantees that the queue operations of its implementations will be thread-safe and atomic.
But this guarantee doesn't extend to the bulk operations inherited from Collection (addAll, containsAll, retainAll and removeAll) unless the individual implementation provides it. So it is possible, for example, for addAll to fail, throwing an exception, after adding only some of the elements in a collection.

Blocking queue has the following characteristics:
  • methods to add an item to the queue, waiting for space to become available in the queue if necessary;
  • corresponding methods that take an item from the queue, waiting for an item to put in the queue if it is empty;
  • optional time limits and interruptibility on the latter calls;
  • efficient thread-safety: blocking queues are specifically designed to have their put() method called from one thread and the take() method from another— in particular, items posted to the queue will be published correctly to any other thread taking the item from the queue again; significantly, the implementations generally achieve this without locking the entire queue, making them highly concurrent components;
  • integration with Java thread pools: a flavour of blocking queue can be passed into the constructor of ThreadPoolExecutor to customise the behaviour of the thread pool.
Implementations of blocking queue
ArrayBlockingQueue : A simple bounded BlockingQueue implementation backed by an array.

DelayQueue : An unbounded blocking queue of Delayed elements, in which an element can only be taken when its delay has expired. It uses elements that implement the java.util.concurrent.Delayed interface.

PriorityBlockingQueue : This queue bases ordering on a specified Comparator, and the element returned by any take() call is the smallest element based on this ordering.

LinkedBlockingQueue : An optionally bounded BlockingQueue implementation backed by a linked list; if no capacity is given it is effectively unbounded.

SynchronousQueue : This queue has a size of zero (yes, you read that correctly). It blocks put() calls until another thread calls take(), and blocks take() calls until another thread calls put(). Essentially, elements can only go directly from a producer to a consumer, and nothing is stored in the queue itself (other than for transition purposes).


Example - Producer consumer problem with Blocking queue
The queue takes care of all the details of synchronizing access to its contents and notifying other threads of the availability of data.

Producer.java
import java.util.concurrent.BlockingQueue;

public class Producer extends Thread {
    private BlockingQueue<Integer> cubbyhole;
    private int number;

    public Producer(BlockingQueue<Integer> c, int num) {
        cubbyhole = c;
        number = num;
    }

    public void run() {
        for (int i = 0; i < 10; i++) {
            try {
                cubbyhole.put(i);
                System.out.format("Producer #%d put: %d%n", number, i);
                sleep((int) (Math.random() * 100));
            } catch (InterruptedException e) { }
        }
    }
}

Consumer.java

import java.util.concurrent.*;

public class Consumer extends Thread {
    private BlockingQueue<Integer> cubbyhole;
    private int number;

    public Consumer(BlockingQueue<Integer> c, int num) {
        cubbyhole = c;
        number = num;
    }

    public void run() {
        int value = 0;
        for (int i = 0; i < 10; i++) {
            try {
                value = cubbyhole.take();
                System.out.format("Consumer #%d got: %d%n", number, value);
            } catch (InterruptedException e) { }
        }
    }
}


ProducerConsumerTest.java

import java.util.concurrent.ArrayBlockingQueue;

public class ProducerConsumerTest {
    public static void main(String[] args) {
        ArrayBlockingQueue<Integer> c = new ArrayBlockingQueue<Integer>(1);
        Producer p1 = new Producer(c, 1);
        Consumer c1 = new Consumer(c, 1);

        p1.start();
        c1.start();
    }
}

Possible Use cases for BlockingQueue

These features make BlockingQueues useful for cases such as the following:
  • a server, where incoming connections are placed on a queue, and a pool of threads picks them up as those threads become free;
  • in a variety of parallel processes, where we want to manage or limit resource usage at different stages of the process.

invokeAll via ExecutorService

The signature of this method is as follows (in Java 6):
<T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) 
throws InterruptedException

In traditional Java, if we have to launch multiple threads, we have to create Thread objects and call the start method on each one in turn.
In Java 5.0 and above, we can instead put all the Callable objects into a collection and pass that collection to an ExecutorService to launch them.

The invokeAll() method invokes all of the Callable objects in the collection passed as a parameter. It returns a list of Future objects via which you can obtain the result of each Callable's execution. invokeAll() is a blocking method: the JVM won't proceed to the next line until all the tasks are complete.


Keep in mind that a task might have finished by throwing an exception, so it may not have "succeeded". Future.isDone() does not distinguish the two cases, but calling Future.get() will rethrow the failure wrapped in an ExecutionException.

Example:
ExecutorService executorService = Executors.newFixedThreadPool(3); // newFixedThreadPool requires a pool size

List<Callable<String>> callables = new ArrayList<Callable<String>>();

callables.add(new Callable<String>() {
public String call() throws Exception {
return "Task 1";
}
});
callables.add(new Callable<String>() {
public String call() throws Exception {
return "Task 2";
}
});
callables.add(new Callable<String>() {
public String call() throws Exception {
return "Task 3";
}
});

List<Future<String>> futures = executorService.invokeAll(callables);

for(Future<String> future : futures){
System.out.println("future.get = " + future.get());
}

executorService.shutdown();
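To see how a failed task surfaces, here is a sketch (task bodies and names are mine) in which one Callable throws; get() rethrows the cause wrapped in an ExecutionException:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllFailureDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        List<Callable<String>> tasks = new ArrayList<Callable<String>>();
        tasks.add(new Callable<String>() {
            public String call() { return "ok"; }
        });
        tasks.add(new Callable<String>() {
            public String call() throws Exception {
                throw new IllegalStateException("task failed");
            }
        });

        // invokeAll still returns one Future per task, in the same order;
        // failures only show up when get() is called.
        List<Future<String>> futures = pool.invokeAll(tasks);
        for (Future<String> f : futures) {
            try {
                System.out.println("result = " + f.get());
            } catch (ExecutionException e) {
                System.out.println("failed with " + e.getCause());
            }
        }
        pool.shutdown();
    }
}
```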


Some good books on java concurrency

Java Concurrency in Practice
by Brian Goetz and others

Concurrent Programming in Java
by Doug Lea

(Though old now, it is still good. The author is the god of concurrency in Java, but the book reads as if written by a genius who is not so good an author.)

The Art of Multiprocessor Programming
by Maurice Herlihy and Nir Shavit

Effective Java
by Joshua Bloch

Though this book is mainly about good practices in Java in general, it includes several practices that will help you write concurrent code.


ArrayBlockingQueue

An ArrayBlockingQueue is a bounded queue backed by an array: a "bounded buffer" in which elements are held in a fixed-size array. Once the capacity of this queue is defined it cannot grow; trying to put an element into a full queue results in a blocking wait and, similarly, taking an element from an empty queue blocks. In this queue the elements are ordered FIFO (first-in, first-out): the element that has been on the queue the longest is the head, and the element that has been on the queue the shortest time is the tail. Insertion of an element happens at the tail, and elements are retrieved from the head.
All of the optional methods of the Collection and Iterator interfaces are implemented by this class and its iterator.

Syntax

public class ArrayBlockingQueue<E>
Parameter description
E : the type of elements held in this collection.
Constructors of the ArrayBlockingQueue class are :
  • ArrayBlockingQueue(int capacity) : creates an ArrayBlockingQueue with the given fixed capacity and the default access policy (non-fair).
  • ArrayBlockingQueue(int capacity, boolean fair) : creates an ArrayBlockingQueue with the given fixed capacity and the specified access policy.
  • ArrayBlockingQueue(int capacity, boolean fair, Collection<? extends E> c) : creates an ArrayBlockingQueue with the given fixed capacity and the specified access policy, initially containing the elements of the given collection, added in the traversal order of the collection's iterator.
Example :
Here is a simple example illustrating how to use the methods of the ArrayBlockingQueue class.


import java.util.Iterator;
import java.util.concurrent.ArrayBlockingQueue;

public class ArrayBlockingQueueDemo {
    public static void main(String args[]) {
        // Raw types are kept so that the Integer queue can be drained into
        // the String queue below, as in the sample output.
        ArrayBlockingQueue abq = new ArrayBlockingQueue(10);
        abq.add(1);
        abq.add(2);
        abq.add(3);
        abq.add(4);
        abq.add(5);
        System.out.println("Elements of queue1= " + abq);
        ArrayBlockingQueue abq1 = new ArrayBlockingQueue(10);
        abq1.offer("A");
        abq1.offer("B");
        abq1.offer("C");
        abq1.offer("D");
        abq1.offer("E");
        abq1.offer("F");
        System.out.println("Elements of queue2 = " + abq1);
        int i = abq.drainTo(abq1, 4);
        System.out.println("Now elements of queue2 = " + abq1);
        System.out.println("Rest element of queue1 = " + abq);
        Iterator it = abq1.iterator();
        System.out.println("Elements of queue2 using iterator = ");
        while (it.hasNext()) {
            System.out.println(it.next());
        }
        Object obj = abq1.peek();
        System.out.println("The head element of queue2 = " + obj);
        Object obj1 = abq1.poll();
        System.out.println("Elements of queue2 = " + abq1);
        System.out.println("The removed head element = " + obj1);
        int i1 = abq1.size();
        System.out.println("Size of queue2 = " + i1);
        int i2 = abq.size();
        System.out.println("Size of queue1 = " + i2);
    }
}

Output:
Elements of queue1= [1, 2, 3, 4, 5]

Elements of queue2 = [A, B, C, D, E, F]

Now elements of queue2 = [A, B, C, D, E, F, 1, 2, 3, 4]

Rest element of queue1 = [5]

Elements of queue2 using iterator =

A

B

C

D

E

F

1

2

3

4

The head element of queue2 = A

Elements of queue2 = [B, C, D, E, F, 1, 2, 3, 4]

The removed head element = A

Size of queue2 = 9

Size of queue1 = 1



Example 2 - Producer Consumer Problem

Let's look at the example.
This example has three components:
1. Producer Thread – This thread starts adding the data into the Queue.
2. Consumer Thread – This thread gets the data from the Queue whenever any data is added by the Producer.
3. Blocking Queue – This acts as an intermediary between the Producer and the Consumer thread. It gets the data or object from the producer thread and hands it over to the consumer thread.
ExecutorQueue.java
Let's create an ExecutorQueue class which holds the ArrayBlockingQueue object. Any data or object can be added to and retrieved from the blocking queue instance.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ExecutorQueue
{
    public static BlockingQueue<Object> queue = new ArrayBlockingQueue<Object>(100);

    /**
     * Add data to the queue.
     */
    public static void addDataInQueue(Object obj)
    {
        queue.add(obj);
    }

    /**
     * Get data from the queue, blocking until some is available.
     */
    public static Object getDataFromQueue() throws InterruptedException
    {
        return queue.take();
    }
}



ConsumerThread.java
The consumer thread below waits on the array blocking queue and retrieves the data as and when any object is added to the queue.

Note: In the Data Processing section, you can write your custom processing logic as per your requirements
public class ConsumerThread extends Thread
{
    public void run()
    {
        System.out.println("\nConsumerThread started...");
        boolean loop = true;
        while (loop)
        {
            try
            {
                System.out.println("\nConsumerThread: Waiting to fetch data from Queue...");
                String data = (String) ExecutorQueue.getDataFromQueue();
                System.out.println("ConsumerThread: Got the data from Queue; Object = " + data);
                System.out.println("ConsumerThread: Processing the data (" + data + ")");

                /*
                 * Data Processing section:
                 *
                 * Note: Write your processing logic here based on the data retrieved
                 */
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
    }
}

Main method – ProducerApplication.java
The Producer is the triggering point for testing the array blocking queue. It does the following operations:
1. It takes care of starting the Consumer thread so that it will wait for any data in the Queue.
2. Soon after starting the Consumer thread, it starts adding data into the blocking queue. Once the data is added, the Consumer thread retrieves it using the queue.take() method.


public class ProducerApplication
{
    public static void main(String[] args)
    {
        // Start the consumer thread so that it waits for an object in the
        // blocking queue. As soon as an object is added to the queue, it
        // will take and process the data.
        System.out.println("ProducerApplication: Starting the Processor Thread...\n");
        ConsumerThread thread = new ConsumerThread();
        thread.start();

        ExecutorQueue.addDataInQueue("Object-1");
        ExecutorQueue.addDataInQueue("Object-2");
    }
}

Output

ProducerApplication: Starting the Processor Thread...


ConsumerThread started...

ConsumerThread: Waiting to fetch data from Queue...
ConsumerThread: Got the data from Queue; Object = Object-1
ConsumerThread: Processing the data (Object-1)

ConsumerThread: Waiting to fetch data from Queue...
ConsumerThread: Got the data from Queue; Object = Object-2
ConsumerThread: Processing the data (Object-2)

ConsumerThread: Waiting to fetch data from Queue...
