Scope of this article.
The intent of this article is to give a brief overview of how multithreading works in J2ME and to address some common misconceptions regarding the subject.
Introduction
In contrast to many languages that came before it, Java offered threading support from day one, not only in the standard API but as an integral part of the VM. The truth, however, is that Java itself does not implement any threading functionality. It merely abstracts the functionality found in the host OS. As a result, the behavior of threading in Java is said to be “non-deterministic”. In other words, Java makes no guarantees as to how multithreading will be accomplished, as each OS can have its own concept of what a thread is and does.
How does multithreading work?
For the reasons outlined above, it is not possible to say with certainty how multithreading works on a J2ME-enabled device. One can, however, make an educated guess that the device has a single, single-core CPU. The OS may be completely unknown, and since most MIDlets are targeted at more than one device, catering to a single OS may lead to compatibility issues on the others. Having said this, the typical OS takes one of the following multithreading approaches…
1. Preemptive Multithreading
The reader is probably familiar with this term, as Windows and Mac OS X are both preemptive multithreading OSes. In this approach the OS allots a CPU time slice to each thread. When the time has elapsed, the executing thread’s state is saved and the next thread’s state is loaded before execution resumes.
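As a rough sketch in plain Java (CLDC has no lambdas, so the pre-1.5 anonymous-class style below is the one a MIDlet would use), two threads count side by side while the scheduler suspends and resumes them at points it chooses. The class name and counts here are invented for illustration:

```java
// Sketch: two threads whose work may be interleaved by a preemptive scheduler.
public class PreemptionDemo {

    // Each thread counts on its own; the scheduler decides when to
    // suspend one and resume the other.
    static int countTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += 1;
            Thread.yield(); // a hint that the scheduler may switch here; it is free to ignore it
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new Runnable() {
            public void run() { System.out.println("A done: " + countTo(1000)); }
        });
        Thread b = new Thread(new Runnable() {
            public void run() { System.out.println("B done: " + countTo(1000)); }
        });
        a.start();
        b.start();
        a.join(); // wait for both, regardless of how they were scheduled
        b.join();
    }
}
```

Which thread finishes first, and how their output interleaves, is up to the scheduler, which is exactly the non-determinism described above.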
2. Cooperative Multithreading
In this approach each thread is tasked with yielding the CPU to other threads when it sees fit to do so, usually when its own job is complete. Cooperative multithreading is more common on older OSes and on portable devices (like some J2ME-enabled devices).
In both approaches it will appear that many threads are running simultaneously when in fact only one thread is ever running at a time.
The ramifications of multithreading
Regardless of the multithreading approach used by the OS, one fact remains: thread contexts have to be swapped in and out of the CPU. What is a thread context? Put simply, it is a snapshot of the CPU registers taken just prior to thread switching. When thread B is brought in to execute, it doesn’t want values from thread A littering the registers. Thread B needs the values from its own snapshot (the one taken just prior to thread B’s swap out).
Here is an example of the sequence the CPU steps through:
...
Thread B executes
Thread B snapshot taken
Thread A snapshot inserted (context switch!)
Thread A executes
Thread A snapshot taken
Thread B snapshot inserted (context switch!)
Thread B executes
...
This way each thread acts under the impression that no other thread has executed. This is great, but how many of the steps in the trivial example above were dedicated to thread execution and how many were just snapshot maintenance? Right: there were 3 execution steps and 4 snapshot-maintenance steps. While thread context swapping is straightforward and won’t bring your CPU to a grinding halt, it also does not contribute to the application’s progress and is really just “dead” time.
This brings me to the first common misconception some people have regarding multithreading performance. If two threads are executing at the same time (remember, they only appear to be running simultaneously), they should complete their tasks faster than if each were to execute in sequence, right? Not necessarily. Let’s not forget about the thread context switching which must take place in order to give each thread the illusion that it is the only one running. That switching adds work for the CPU that does not benefit the application.
So, to sum it up: on a single-core device, multithreading by itself adds overhead and slows your application’s raw performance.
If multithreading is so bad, why is it available?
The reason it is available is that there is an appropriate time for its use. If you only remember one part of this article let it be this…
Multithreading should be used where two or more unrelated events must occur simultaneously or in overlap.
The keyword of that sentence being “unrelated”. Think of the browser you are using to read this for a moment. When you tell it to print this page, it spawns a new thread and asks it to carry out the print job and then die. If this weren’t the case, that is, if the print job occurred in the main thread, you would have to wait for the printing to finish before jumping to the next page or even scrolling down on this one. And what if the print job hung for some reason? Perhaps the printer is receiving the job but for some reason isn’t informing your computer as to its progress. What a great user experience that would be. Well, thankfully that isn’t how it is, and your MIDlets should benefit from this lesson too.
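A minimal sketch of the same hand-off in plain Java (the class name, the result slot, and the fake “job” are all invented for illustration; a real MIDlet would hand its long task to a worker thread in the same way):

```java
// Sketch: hand a long task to a worker thread so the caller is free immediately.
public class BackgroundJob {

    static Thread startJob(Runnable job) {
        Thread worker = new Thread(job);
        worker.start();  // returns immediately; the job runs in overlap
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] result = new int[1]; // crude result slot for the worker to fill
        Thread t = startJob(new Runnable() {
            public void run() {
                // stand-in for a slow, unrelated job (printing, networking, ...)
                result[0] = 42;
            }
        });
        // the caller stays responsive here while the job runs...
        t.join(); // in a MIDlet you would poll a flag or use a callback instead of blocking
        System.out.println("job result: " + result[0]);
    }
}
```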
When you need to animate a progress gauge or icon to reflect the state of a long-running operation, it is appropriate to use a thread. The animation’s logic is in no way related to that of the long operation (their execution timing is, but not the tasks they perform), and they operate in overlap. Note that, depending on the OS thread scheduler, there is no guarantee the device can reliably animate the progress indicator if the long operation is allowed to hog the CPU.
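Here is a sketch of such an animation thread in plain Java. In a real MIDlet each tick would call repaint() on a Canvas; here it just advances a hypothetical frame counter, and the class name and tick rate are made up:

```java
// Sketch: a loop on its own thread that ticks an animation frame
// while some long job runs elsewhere.
public class Spinner implements Runnable {
    private volatile boolean running = true; // written by one thread, read by another
    private int frame = 0;                   // a real spinner would draw frame % 4

    public void run() {
        while (running) {
            frame++;                         // advance the animation
            try {
                Thread.sleep(100);           // ~10 ticks/sec is plenty for a gauge
            } catch (InterruptedException e) {
                return;                      // treat interruption as a stop request
            }
        }
    }

    public void stop()  { running = false; }
    public int  frame() { return frame; }
}
```

The volatile flag is what lets the long operation’s thread (or the UI thread) tell the animation to stop cleanly.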
A common pitfall of this “unrelated” clause is giving a unique thread to each of a game’s objects because they all move independently of one another. Let’s not forget the last part of that statement: “… must occur simultaneously or in overlap”. Two or more game objects needn’t be managed simultaneously. They can be managed in sequence well ahead of the next render and therefore don’t qualify as being multithreading dependent. Giving each object its own thread will only serve to slow the game down to the point of not being fun, so please exercise restraint in this area.
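To illustrate, a single-threaded sketch (the Sprite class and its movement rule are invented for this example) updates every object in sequence inside one loop, well before the frame is drawn:

```java
// Sketch: many independently moving objects updated in sequence by one loop.
public class GameLoop {

    static class Sprite {
        int x, dx;
        Sprite(int x, int dx) { this.x = x; this.dx = dx; }
        void update() { x += dx; } // each object moves by its own rule
    }

    // One pass over all objects; no per-object thread, no context switches.
    static void tick(Sprite[] sprites) {
        for (int i = 0; i < sprites.length; i++) {
            sprites[i].update();
        }
    }

    public static void main(String[] args) {
        Sprite[] s = { new Sprite(0, 1), new Sprite(10, -2) };
        tick(s); // update everything, then render once
        System.out.println(s[0].x + ", " + s[1].x);
    }
}
```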
The threads that are already running
Whether you know it or not, your application is running its own “main” thread, your key press/release events are likely being captured on another thread, and your painting operations can occur on yet another thread.
Be certain that your application needs extra threads before implementing them!
Threads trivia
Before closing I’d like to share a compact list of facts you should know about threads…
Setting the priority of a thread does not guarantee that the thread will have more or less priority than other threads. The priority number you assign may not even map to the OS’s priority system, if it has one at all.
Calling Thread.sleep(1000) will cause the thread to sleep for at least 1000 milliseconds and perhaps more, since the thread scheduler takes time to context-switch the thread back in. The main takeaway is not to count on the sleep method as an accurate timekeeper if your logic requires precision.
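A quick way to see this is to measure how long a sleep actually took (a plain Java sketch; the class name and the requested duration are arbitrary, and the overshoot will vary by OS and scheduler load):

```java
// Sketch: Thread.sleep guarantees a minimum, not an exact duration.
public class SleepDrift {

    static long measure(long requestedMs) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(requestedMs);
        return System.currentTimeMillis() - start; // >= requestedMs, often more
    }

    public static void main(String[] args) throws InterruptedException {
        long actual = measure(50);
        System.out.println("asked for 50 ms, slept " + actual + " ms");
    }
}
```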
Each thread has its own call stack and that means any thread can cause the application to crash if exceptions are not properly caught in each run() method.
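A sketch of a run() method that guards its own body (the class name is invented and the failing “work” is contrived; the point is only that the catch lives inside run(), where the thread’s call stack ends):

```java
// Sketch: an uncaught exception kills its own thread, so run() guards its body.
public class SafeRunner implements Runnable {
    private volatile boolean failed = false;

    public void run() {
        try {
            doWork(); // stand-in for the thread's real job
        } catch (RuntimeException e) {
            failed = true; // record the failure instead of letting the thread die silently
        }
    }

    private void doWork() {
        throw new RuntimeException("boom"); // contrived failure for the sketch
    }

    public boolean failed() { return failed; }
}
```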
Synchronized code is said to run roughly 4 times slower than non-synchronized code. Avoiding multithreading reduces the chances of race conditions which can, in turn, negate the need for synchronized code.
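For reference, here is the kind of guard synchronized provides when two threads really do share state (the Counter class is invented for this example). With a single thread touching the value, none of this locking, and none of its cost, is needed:

```java
// Sketch: a lock acquired on every call keeps concurrent increments from
// losing updates; value++ alone is not atomic.
public class Counter {
    private int value = 0;

    public synchronized void increment() { value++; }
    public synchronized int  value()     { return value; }
}
```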
Conclusion
Multithreading doesn’t necessarily make your application run faster and in many cases can even slow it down. How threads behave varies between devices and the emulator and there is no guarantee of how threads will be scheduled. There is an appropriate time to use threads but in most cases they are not needed and should be avoided.
Threads add an extra dimension of “non-determinacy” to an already “non-deterministic” platform and debugging threading issues is extremely difficult even on desktop applications with proper tools.
Links:
Chet Haase’s articulate post on “Threadaches”