
Scope of Article

In this tutorial we’ll step outside of the IDE and into the physical world for a lesson in creating “Network-inhibiting shoeboxes” or magic boxes as I like to call them. The aim of doing this is to create an environment in which we can instantly simulate the loss or lack of a wireless/mobile network.

I first learned this technique while working on a project for a client whose MIDlet needed to download hefty data files just following installation and we needed to ensure that a network drop like one experienced as the result of entering a train tunnel during download would not cause the MIDlet to freeze up, or worse crash.

By making and using your own magic box you’ll be able to make your MIDlets equally robust.

Materials

  • One box of aluminum foil
  • An empty shoebox
  • Scotch tape
  • Scissors
  • Patience

    Steps

    Ensuring the shoebox is big enough to fit your mobile device.

    If you have a flip phone make sure it fits in the shoebox while flipped open. It is also advisable to choose a shoebox whose lid is completely removable and not joined to the box in any way.

    Preparing the shielding.

    Cut the aluminum foil into strips roughly 10cm/4″ in width. Cut enough of these strips to cover both the inside and outside of the box with anywhere from 4 to 10 layers. My boxes often require 5 layers to be effective. It’s only aluminum after all.

    The lining.

    Tape the strips one by one, both inside and outside the box. When you feel the box is sufficiently layered in aluminum foil, finish by applying extra tape to the lid edge and to the top of the box where the lid comes into contact with it. This is where the most wear and tear will occur, and extra tape will ensure the longevity of your carefully crafted box.

    Testing

    Depending on the power of your mobile device and the distance and strength of the carrier’s signal, you may require more layers of aluminum. You can determine this by inserting the mobile device into the box and closing it for up to one minute. Many devices do not report the true status of the network signal at all times, instead showing snapshots at specific intervals. Keep this in mind when testing.

    Usage

    You’ve finished your magic box and want to put it into action. There are many tests you can conduct against your MIDlet, such as determining how it reacts in the absence of a network, or how it copes with suddenly going from a connected to an unconnected state. Lastly, you may wish to see whether your MIDlet can gracefully resume a previously severed connection.

    Leaving the lid of your box off, prepare your MIDlet to establish a connection at a single button press, and place the device into the box.

    With the lid ajar, make the MIDlet establish the connection, then quickly close the box. Wait one or more minutes, then reopen the box. What happened? Is smoke billowing from your mobile device, or did it keep its cool and await its next command?
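The recovery logic such a test exercises can be sketched in plain Java. This is an illustrative sketch, not a real API: Task, attempt, and demoRun are made-up names, and on an actual MIDlet the task body would wrap a Generic Connection Framework call such as javax.microedition.io.Connector.open rather than the stand-in used here.

```java
public class RetryDemo {
    public interface Task { void run() throws java.io.IOException; }

    /** Tries the task up to maxTries times, pausing between attempts.
     *  Returns the attempt number that succeeded, or -1 if all failed. */
    public static int attempt(Task task, int maxTries, long backoffMs) {
        for (int i = 1; i <= maxTries; i++) {
            try {
                task.run();
                return i;                          // connection established
            } catch (java.io.IOException e) {      // network dropped (box closed)
                try { Thread.sleep(backoffMs); }   // back off before retrying
                catch (InterruptedException ie) { return -1; }
            }
        }
        return -1;                                 // signal never came back
    }

    /** Simulates a box that stays closed for two attempts, then is opened. */
    public static int demoRun() {
        final int[] calls = {0};
        Task flaky = new Task() {
            public void run() throws java.io.IOException {
                if (++calls[0] < 3) throw new java.io.IOException("no signal");
            }
        };
        return attempt(flaky, 5, 10);
    }

    public static void main(String[] args) {
        System.out.println("succeeded on attempt " + demoRun());
    }
}
```

A MIDlet wired this way survives both the "box stays closed" case (attempt returns -1 and the UI can report the failure) and the "box reopened" case (a later attempt succeeds).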

    Using metal tins

    You might be wondering if a metal tin or cookie box could suffice in lieu of a cardboard one lined with aluminum.

    It may, but in my experience the lids of such tins aren’t typically easy to open and close (this is intentional, to keep cookies from becoming soggy). If you do find a metal tin that is easy to open and wish to use it, it is advisable to at least provide some cushioning at the bottom to prevent scratching your mobile device.

    In closing

    Is your magic box working like magic for you? I welcome tips and improvements you might have run across so do let me know of your successes (and failures).

    Scope of this article.

    The intent of this article is to give a brief overview of how memory management works in Java and to touch on its effects with regards to developing J2ME MIDlets.

    Most Java programmers are aware of the System.gc() and/or Runtime.gc() methods offered by the Java API. However, many programmers have only a partial grasp of the effects of calling either of these methods. Readers of this document will walk away with a more in-depth understanding of garbage collection and memory management in general.

    Introduction to Garbage collection.

    Garbage collection is a service that can be provided by the Java Virtual Machine to reclaim memory that is deemed eligible for reclamation. All editions of Java (J2ME, J2SE, and J2EE) are capable of providing this service. Unlike C++, Java offers no way to manually delete allocated memory; the application must leave memory management in the capable hands of the JVM. For this reason there is no concept of a pointer in Java. A pointer would allow arbitrary access to memory, and this would introduce possible issues for the garbage collector and other aspects of the JVM’s operation.

    What happens when System.gc() is called?

    There is no definitive answer to this question. The JVM is NOT even required to perform garbage collection services. That’s right: if you wanted to implement your own JVM you could make it a spec-compliant one without collecting any garbage at all, although it wouldn’t run for long before exhausting memory.

    It is very important to realize that calling this method is merely a suggestion to the JVM to perform garbage collection. Even if it supports garbage collection, it may choose not to perform it. Also, the JVM may decide to perform garbage collection on its own, without the application calling System.gc(), if it deems such an action necessary (for example, when free memory dwindles).
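A tiny plain-Java illustration of that "suggestion" nature (the method name freeAfterGcHint is my own): the code manufactures garbage, asks for a collection, and reports the free heap afterwards, but the JVM was free to ignore the hint entirely.

```java
public class GcHintDemo {
    /** Creates garbage, then asks (does not order) the JVM to collect.
     *  Returns free heap bytes afterwards; the hint may have been ignored. */
    public static long freeAfterGcHint() {
        byte[] garbage = new byte[1 << 20];   // 1 MB that will become garbage
        garbage = null;                       // no live reference remains
        System.gc();                          // merely a suggestion to the JVM
        return Runtime.getRuntime().freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("free heap: " + freeAfterGcHint() + " bytes");
    }
}
```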

    Assuming garbage collection does occur, how does it work?

    Again, there is no definitive answer to this question. It may take a “mark and sweep” approach, or another entirely; this is up to the JVM vendor to decide. The BEA JRockit VM, for example, exposes various GC approaches and lets the administrator decide which to employ.

    What about garbage collection IS definitive?

    Only one thing is known for sure. In order for an object to be eligible for garbage collection, no live thread can have any way to access it. This can be accomplished in a few different ways:

  • A thread dies, taking the sole reference of the object with it to its grave.
  • All live threads nullify their references to an object.
  • All references to an object are isolated. (Only A points to B and B to A, so there is no way to access either anymore.)

    How the JVM knows that an object has live references is, however, implementation-specific. It might maintain counters, lookup tables, or something else entirely. It is enough that the JVM knows which objects are and are not eligible for garbage collection.
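Eligibility can be observed from plain Java with a WeakReference probe. In this sketch (class and method names are my own) the sole strong reference is nullified and the code then checks whether a collection actually reclaimed the object; note that even here the JVM is free to ignore every System.gc() hint.

```java
import java.lang.ref.WeakReference;

public class EligibilityDemo {
    /** Drops the only strong reference to an object, then watches (via a
     *  WeakReference probe) whether the collector actually reclaims it. */
    public static boolean becameUnreachable() throws InterruptedException {
        Object o = new Object();
        WeakReference<Object> probe = new WeakReference<Object>(o);
        o = null;                             // a live thread nullifies its reference
        for (int i = 0; i < 20 && probe.get() != null; i++) {
            System.gc();                      // still only a suggestion
            Thread.sleep(10);                 // give the collector a chance
        }
        return probe.get() == null;           // true once the object was reclaimed
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("collected: " + becameUnreachable());
    }
}
```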

    Ensuring an object’s eligibility for garbage collection.

    Following the list above of what constitutes a live reference, a good approach to avoiding memory leaks is to use the singleton pattern to create a “reference manager”. Since the manager alone maintains a single reference to each object, only that one reference need be nullified.

    Another approach that a reader pointed out (and was added above) was to kill the thread which is the sole maintainer of references to particular objects. Once the thread is killed the references die with it, making the objects automatically eligible for garbage collection without needing to nullify each one individually.

    Preventing an object from being eligible for garbage collection.

    While counter-productive to the concept of managing memory, it is possible to temporarily thwart the garbage collector’s attempt to reclaim the memory allocated to an object. In the class’s finalize method you can assign an outside reference to the object before it is collected.


    // Resurrect the object by re-publishing a reference to it.
    protected void finalize() {
        OutsideClass.referenceToMe = this;
    }

    This will only work the first time the garbage collector attempts to reclaim the object, since the finalize method is only ever called once per object.

    Memory management and its implications for J2ME MIDlets.

    When developing applications targeting the J2SE platform, memory optimization typically takes a backseat to more pressing development concerns. In the J2ME world this approach will quickly run a MIDlet into the ground. J2ME-enabled devices, in a bid to remain affordable, are slim on memory; depending on the device there may be as little as a few hundred kilobytes of accessible heap space. For this reason, solid memory management techniques are crucial to a MIDlet’s robustness. These techniques can also be applied, without any additional effort, to benefit J2SE and J2EE applications.

    How do I deal with OutOfMemoryErrors?

    Ideally you don’t, or at least shouldn’t. Errors, as opposed to Exceptions, are not meant to be caught by your application. The fact that this error occurs is a sign that you are trying to allocate more memory than is available as a single contiguous chunk. This usually occurs when you attempt to load a large image after loading many smaller ones: the smaller images used up half the heap and no remaining chunk is large enough to fit the large one. To overcome this, call System.gc() more often (even though its effect is not guaranteed) and avoid loading such large resources into memory. This way you shouldn’t need to catch the error.
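As a sketch of that ordering advice (all names here are illustrative), small byte buffers stand in for the small images and one large array for the big resource: the small buffers are discarded and a collection is suggested before the large contiguous block is requested, rather than catching the Error after the fact.

```java
public class HeadroomDemo {
    /** Frees the small buffers (and hints at a collection) before requesting
     *  one large contiguous block, rather than catching OutOfMemoryError. */
    public static int loadLargeAfterCleanup() {
        byte[][] small = new byte[64][];
        for (int i = 0; i < small.length; i++) {
            small[i] = new byte[32 * 1024];       // many small "images"
        }
        small = null;                             // discard them all first
        System.gc();                              // suggest reclaiming that heap
        byte[] large = new byte[4 * 1024 * 1024]; // then load the big resource
        return large.length;
    }

    public static void main(String[] args) {
        System.out.println("allocated " + loadLargeAfterCleanup() + " bytes");
    }
}
```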

    Object reuse

    In many cases an application instantiates an object, uses it, then discards it before instantiating another such object. This is not only inefficient with regards to time but can lead to accelerated memory fragmentation (unfortunately there isn’t a System.defragHeap() method) and is more prone to failure on devices with buggy garbage collection implementations.

    To alleviate this issue lazy loading combined with the factory design pattern can be used. In this pattern an object factory is called upon to instantiate an object rather than manual instantiation via a constructor. The factory can, instead of always instantiating a new object, return an object which was previously marked as discarded (how this marking occurs may vary and is up to the developer). This approach will abstract the management of instances from the core MIDlet logic and will also enhance performance in the long run.
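A minimal sketch of such a factory, assuming a hypothetical SpritePool with obtain/recycle methods (none of these names come from a real API): obtain() hands back a previously discarded instance when one exists and only instantiates when the pool is empty.

```java
import java.util.ArrayList;
import java.util.List;

public class SpritePool {
    public static class Sprite { int x, y; }      // the reusable object

    private final List<Sprite> free = new ArrayList<Sprite>();
    public int created = 0;                       // objects actually instantiated

    /** Factory method: hands back a discarded instance when one exists,
     *  instantiating a new object only when the pool is empty. */
    public Sprite obtain() {
        if (!free.isEmpty()) {
            return free.remove(free.size() - 1);
        }
        created++;
        return new Sprite();
    }

    /** Marks an instance as discarded so obtain() can reuse it. */
    public void recycle(Sprite s) {
        s.x = 0;
        s.y = 0;                                  // reset state before reuse
        free.add(s);
    }

    public static void main(String[] args) {
        SpritePool pool = new SpritePool();
        Sprite a = pool.obtain();
        pool.recycle(a);
        Sprite b = pool.obtain();                 // same instance, no new allocation
        System.out.println("created " + pool.created + ", reused: " + (a == b));
    }
}
```

Because the pool is the single place where instances are born and retired, the core MIDlet logic never touches a constructor, and the steady-state allocation rate drops to zero.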

    Conclusion

    Call System.gc() but don’t forget it’s merely a suggestion that garbage collection take place and not a command to that effect.

    Try to reuse objects so that you can lessen your reliance on the already highly non-deterministic garbage collector.

    When possible load large resources ahead of smaller ones to minimize the potential effects of memory fragmentation.

    Scope of this article.

    The intent of this article is to give a brief overview of how multithreading works in J2ME and to address some common misconceptions regarding the subject.

    Introduction

    In contrast to many existing languages, Java, from day one, offered threading support not only in the standard API but as an integral part of the VM. In truth, however, Java itself does not implement any threading functionality; it merely abstracts the functionality found in the host OS. As a result, the behavior of threading in Java is said to be “non-deterministic”. In other words, Java makes no guarantees as to how multithreading will be accomplished, as each OS can have its own concept of what a thread is and does.

    How does multithreading work?

    For the reasons outlined above, it is not possible to say with certainty how multithreading works on a J2ME-enabled device. One can, however, make an educated guess that the device has a single CPU with a single core. The OS may be completely unknown, and since most MIDlets are targeted at more than a single device, catering to one may lead to compatibility issues on others. Having said this, the typical OS takes one of the following multithreading approaches…

    1. Preemptive Multithreading

    The reader is probably familiar with this term, as Windows and Mac OS X are both preemptive multithreading OSes. In this approach the OS allots a slice of CPU time to each thread. When the time has elapsed, the executing thread’s state is saved and the next thread’s state is loaded before execution resumes.

    2. Cooperative Multithreading

    In this approach each thread is tasked with yielding the CPU over to other threads when it sees fit to do so, usually when its own job is complete. Cooperative multitasking is more common on older OSes and on portable devices (like some J2ME-enabled devices).

    In both approaches it will appear that many threads are running simultaneously when in fact only one thread is ever running at a time.

    The ramifications of multithreading

    Regardless of the multithreading approach used by the OS, one fact remains: thread contexts have to be swapped in and out of the CPU. What is a thread context? Put simply, it is a snapshot of the CPU registers taken just prior to thread switching. When thread B is brought in to execute, it doesn’t want values from thread A littering the registers; thread B needs the values from its own snapshot (the one taken just prior to thread B’s swap out).

    An example of what the CPU does is provided below.


    ...
    Thread B executes
    Thread B snapshot taken
    Thread A snapshot inserted (context switch!)
    Thread A executes
    Thread A snapshot taken
    Thread B snapshot inserted (context switch!)
    Thread B executes
    ...

    This way each thread acts under the impression that no other thread has executed. This is great, but how many of the steps in the trivial example above were dedicated to thread execution, and how many were just snapshot maintenance? Right: there were 3 execution steps and 4 snapshot maintenance steps. While thread context swapping is straightforward and won’t bring your CPU to a grinding halt, it also does not contribute to the application’s progress and is really just “dead” time.

    This brings me to the first common misconception some people have regarding multithreading performance. If two threads are executing at the same time (remember, they only appear to run simultaneously), they should complete their tasks faster than if each were to execute in sequence, right? Not necessarily. Let’s not forget the thread context switching which must take place to give each thread the illusion that it is the only one running. It adds work for the CPU that isn’t beneficial to the application.

    So to sum it up: on a single-core device, multithreading by itself slows your application’s overall performance.

    If multithreading is so bad, why is it available?

    The reason it is available is that there is an appropriate time for its use. If you only remember one part of this article let it be this…

    Multithreading should be used where two or more unrelated events must occur simultaneously or in overlap.

    The keyword of that sentence being “unrelated”. Think of the browser you are using to read this for a moment. When you tell it to print this page it spawns a new thread and asks it to carry out the print job and die. If this weren’t the case, if the print job occurred in the main thread, you would have to wait for the printing to finish before jumping to the next page or even scrolling down on this one. And what if the print job hung for some reason? Perhaps the printer is receiving the job but for some reason isn’t informing your computer as to its progress. What a great user experience that would be. Well thankfully that isn’t how it is and your MIDlets should benefit from this lesson too.

    When you need to animate a progress gauge or icon to reflect the state of a transpiring event, it is appropriate to use a thread. The animation’s logic is in no way related to that of the long event (their execution timing is, but not the tasks they perform) and they operate in overlap. Note that, depending on the OS thread scheduler, there is no guarantee the device can reliably animate the progress indicator if the long event is allowed to hog the CPU.
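A stripped-down sketch of this pattern in plain Java (names are illustrative; a real MIDlet would repaint a Gauge rather than bump a counter): the gauge thread runs in overlap with the long event and winds down when the event completes.

```java
public class OverlapDemo {
    private static volatile boolean working;
    private static volatile int ticks;

    /** Runs a fake "long event" on the main thread while a second thread
     *  advances a gauge counter; returns how many gauge frames were drawn. */
    public static int runWithGauge() throws InterruptedException {
        working = true;
        ticks = 0;

        // The animation thread: unrelated to the long event, runs in overlap.
        Thread gauge = new Thread(new Runnable() {
            public void run() {
                do {
                    ticks++;                       // draw one gauge frame
                    try { Thread.sleep(20); }      // frame delay
                    catch (InterruptedException e) { return; }
                } while (working);
            }
        });
        gauge.start();

        Thread.sleep(200);     // the long event (e.g. a download) occupies us
        working = false;       // event finished; let the gauge thread wind down
        gauge.join();
        return ticks;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("gauge drew " + runWithGauge() + " frames");
    }
}
```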

    A common pitfall of this “unrelated” clause is giving a unique thread to each of a game’s objects because they all move independently of one another. Let’s not forget the last part of that statement: “… must occur simultaneously or in overlap”. Two or more game objects needn’t be managed simultaneously; they can be managed in sequence well ahead of the next render, and therefore don’t qualify as being multithreading dependent. Giving each object its own thread will only serve to slow the game down to the point of not being fun, so please exercise restraint in this area.

    The threads that are already running

    Whether you know it or not your application is running its own “main” thread, all your key press/release events are likely being captured on another thread, and your painting operations can occur on yet another thread.

    Be certain that your application needs extra threads before implementing them!

    Threads trivia

    Before closing I’d like to share a compact list of facts you should know about threads…

    Setting the priority of a thread does not guarantee that the thread will have higher or lower priority than other threads. The priority number you assign may not even map to the OS’s priority system, if it has one at all.

    Calling Thread.sleep(1000) will cause the thread to sleep at least 1000 milliseconds and perhaps more. The thread scheduler takes time to context switch threads. The main takeaway here is to not count on the sleep method being an accurate time keeper if your logic requires precision.
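A quick way to see this for yourself (plain Java, illustrative names): measure the wall-clock time around a sleep and compare it to what was requested.

```java
public class SleepDemo {
    /** Returns how many milliseconds Thread.sleep(ms) actually took. */
    public static long measuredSleep(long ms) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(ms);                 // at LEAST ms, often a little more
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("asked for 100 ms, slept " + measuredSleep(100) + " ms");
    }
}
```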

    Each thread has its own call stack and that means any thread can cause the application to crash if exceptions are not properly caught in each run() method.

    Synchronized code is said to run roughly 4 times slower than non-synchronized code. Avoiding multithreading reduces the chances of race conditions which can, in turn, negate the need for synchronized code.
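For the cases where threads genuinely must share state, a minimal plain-Java sketch of a synchronized counter (class and method names are my own) shows what that synchronization cost buys: without the synchronized keyword the two threads could interleave their read-increment-write steps and lose updates.

```java
public class SyncDemo {
    private int count = 0;

    /** Only one thread at a time may run this on a given instance. */
    public synchronized void increment() { count++; }

    public synchronized int value() { return count; }

    /** Two threads each add n; synchronization keeps the total exact. */
    public static int tally(final int n) throws InterruptedException {
        final SyncDemo counter = new SyncDemo();
        Runnable job = new Runnable() {
            public void run() {
                for (int i = 0; i < n; i++) counter.increment();
            }
        };
        Thread a = new Thread(job);
        Thread b = new Thread(job);
        a.start(); b.start();
        a.join(); b.join();                // wait for both workers to finish
        return counter.value();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("total: " + tally(100000));
    }
}
```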

    Conclusion

    Multithreading doesn’t necessarily make your application run faster and in many cases can even slow it down. How threads behave varies between devices and the emulator and there is no guarantee of how threads will be scheduled. There is an appropriate time to use threads but in most cases they are not needed and should be avoided.

    Threads add an extra dimension of “non-determinacy” to an already “non-deterministic” platform and debugging threading issues is extremely difficult even on desktop applications with proper tools.

    Links:
    Chet Haase’s articulate post on “Threadaches”

    Welcome to the J2ME DevCorner.


    My chicken scratch page (http://www.jay-f.jp/devcorner/) is getting its eviction notice! As my HTML coding skills are not progressing at the rate I would like, I have decided to employ a regular weblog to serve as an archive for all the tutorials, articles, and contributions I have in store.


    The first bits of content I intend to publish are the articles regarding memory management and multithreading in J2ME that have received a great deal of reader feedback on the J2ME Forums.

    Thank you for your patience and let’s get started!