Energy-Efficient Computing for a Long Battery Life: The Emerging Model of Approximate Computing

A long battery life is one of the most important features of any mobile device, whether smartphone or tablet. Increasing battery endurance through smart, energy-efficient processor design and better-suited computing models is an area of intense activity. One approach to this goal is the emerging model of Approximate Computing.

Approximate Computing is the practice of trading precision for energy savings through smart, disciplined, and deterministic design. It spans energy-saving (though imprecise) hardware designs, energy-efficient (though approximate) algorithms, and programming languages that let software exploit the new, energy-aware hardware and processor designs.

There are many scenarios in which a loss of precision in the results is not only acceptable but highly welcome when a gain in energy is the bargain. This imprecision tolerance stems from several sources: the ability of our brains to fill in missing information, the redundancy in data that lets lossy algorithms work well, and so on.

For example, if the brightness of one or two pixels in a 1000-by-800 image is slightly off, most of us will not even notice. Likewise, to find the average brightness of the entire image, we sum the brightness values of all the pixels and divide by 800,000, the total number of pixels. If the brightness levels of a few pixels are summed imprecisely, the error in the average will be hardly noticeable. In such applications we can not only live with imprecision but would happily accept it in exchange for a longer battery life.
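To make this concrete, here is a small plain-Java sketch (class and method names are ours, purely illustrative) showing how little a couple of corrupted pixel values move the average brightness of an 800,000-pixel image:

```java
import java.util.Arrays;

// Sketch: how a couple of corrupted pixel values affect the average
// brightness of a 1000-by-800 (800,000-pixel) image.
public class AvgBrightness {

    // Average brightness of an array of pixel values (0..255).
    static double average(int[] pixels) {
        long sum = 0;
        for (int p : pixels) sum += p;
        return (double) sum / pixels.length;
    }

    public static void main(String[] args) {
        int[] img = new int[1000 * 800];
        Arrays.fill(img, 128);          // uniform mid-gray image
        double exact = average(img);    // 128.0

        // Corrupt two pixels badly, as a bit flip in low-refresh
        // DRAM might.
        img[42] = 0;
        img[777] = 255;
        double approx = average(img);

        // The two averages differ by far less than one brightness
        // level (about one part in a million here).
        System.out.println(Math.abs(exact - approx));
    }
}
```

Two grossly wrong pixels out of 800,000 shift the mean by roughly a millionth of a brightness level, which is the kind of cushion Approximate Computing exploits.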

Processors, however, are precise by default, and this precision costs energy. Circuit voltage levels must normally be held precisely constant; we can lower them to save power, but we pay in errors in our results. We may lower the refresh rate of DRAM: a few bits may flip, introducing some imprecision, but power is saved. We may use a less precise arithmetic unit for the low-order bits, saving energy at the cost of some imprecision.
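The idea of a less precise arithmetic unit for the low-order bits can be modeled in software. This is a toy sketch of our own, not real hardware behavior: an adder that simply discards the k lowest result bits, which bounds the error per addition at 2^k - 1:

```java
// Toy model of an imprecise adder: compute the sum, then discard the
// k low-order bits of the result, as a cheaper arithmetic unit might.
public class LossyAdder {

    // Approximate addition: exact sum with the k lowest bits zeroed.
    static int addApprox(int a, int b, int k) {
        int mask = ~((1 << k) - 1);   // e.g. k=3 -> ...11111000
        return (a + b) & mask;
    }

    public static void main(String[] args) {
        int exact  = 1000 + 523;              // 1523
        int approx = addApprox(1000, 523, 3); // 1520
        // The error per addition is at most 2^3 - 1 = 7.
        System.out.println(exact - approx);   // prints 3
    }
}
```

In real designs the savings come from simpler carry chains and lower voltage in the low-order slice of the adder; the software model only illustrates the bounded-error behavior.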

The key question is how much imprecision we should allow. The answer varies from application to application, depending on its 'imprecision tolerance', the cushion it has for absorbing mistakes. For example, computing the average brightness of a 1000-by-800 image can tolerate imprecise summing of a few pixels' brightness levels.

The big dilemma is that even an imprecise application has some parts that must remain precise. The image-processing application may tolerate imprecision in a few pixels' brightness levels (perhaps caused by storing them in low-refresh-rate DRAM), but if the counters that track the row and column numbers while the sum is being computed are imprecise, skipping a couple of rows or columns can push the result outside the acceptable range of quality.
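This asymmetry can be seen in a small plain-Java experiment (a sketch with made-up names): a few noisy pixel values barely move the average, while a corrupted row counter that skips rows shifts it noticeably:

```java
// Sketch: noisy pixel data vs. a corrupted loop counter, on a
// 100x100 image whose brightness grows with the row index.
public class PreciseCounters {
    static final int N = 100;

    static int[][] gradientImage() {
        int[][] img = new int[N][N];
        for (int r = 0; r < N; r++)
            for (int c = 0; c < N; c++)
                img[r][c] = r;           // brightness = row index
        return img;
    }

    // Average brightness, visiting rows 0, step, 2*step, ...
    static double average(int[][] img, int step) {
        long sum = 0;
        int count = 0;
        for (int r = 0; r < N; r += step)
            for (int c = 0; c < N; c++) {
                sum += img[r][c];
                count++;
            }
        return (double) sum / count;
    }

    public static void main(String[] args) {
        double exact = average(gradientImage(), 1);   // 49.5

        // (a) Approximate data, precise counters: flip one bit in a
        // couple of pixels; the average moves by at most 2/10000.
        int[][] noisy = gradientImage();
        noisy[3][7]  ^= 1;
        noisy[60][2] ^= 1;
        double withNoisyData = average(noisy, 1);

        // (b) Precise data, corrupted counter: skipping every other
        // row shifts the average by a clearly visible 0.5 levels.
        double withBadCounter = average(gradientImage(), 2);  // 49.0

        System.out.println(exact + " " + withNoisyData
                           + " " + withBadCounter);
    }
}
```

The noisy-data error stays in the fourth decimal place; the counter error is several thousand times larger, which is why counters belong in precise storage.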

Researchers have explored imprecise arithmetic units for low-order bits, low DRAM refresh rates for imprecision-tolerant data, lowered voltages for imprecise computation, and more.

These hardware features must be matched by suitable software, software with a much finer level of control. Such software should allocate precise storage (costly in power terms, since it requires DRAM with a high refresh rate) to precision-sensitive data such as row and column counters or the JPEG header, and imprecise storage (cheap in power terms, using low-refresh-rate DRAM) to imprecision-tolerant data such as pixel brightness values. Likewise, it should route precise computation to precise hardware (precise arithmetic units, carefully maintained circuit voltage levels) and imprecision-tolerant computation to imprecise hardware.

The challenge of energy-efficient computing thus translates into precision-aware computing.

This precision awareness, a finer-grained control over how data is stored and processed achieved by smartly blending software and hardware, is at the core of Approximate Computing.

EnerJ, an extension of the Java programming language, has recently been prototyped at the University of Washington. It introduces the annotation @Approx: if a variable's declaration carries this annotation, imprecise storage is allocated for it; without the annotation, declared data remains implicitly precise. Computations on @Approx-annotated variables may then use imprecise arithmetic units or reduced voltage levels.
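An EnerJ-style fragment might look like the following. This is an illustrative sketch, not runnable plain Java (it needs the EnerJ checker and runtime), and loadPixels() is a hypothetical helper of ours:

```java
// Pixel data is marked approximate; the loop counter stays precise
// by default, so no rows or columns can be skipped.
@Approx int[] pixels = loadPixels();      // may live in low-refresh DRAM
@Approx int sum = 0;
for (int i = 0; i < pixels.length; i++) { // i is implicitly precise
    sum += pixels[i];                     // may run on an imprecise ALU
}
// EnerJ requires an explicit endorsement to treat an approximate
// value as precise again.
int avg = endorse(sum / pixels.length);
```

The type system thus keeps the imprecision contained: approximate values cannot silently flow into precise computations such as array indexing or control flow.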

This way, a finer level of control becomes available for selecting a suitable precision level in different parts of a single computation. Approximate Computing is getting ready for the mainstream.

This energy-aware, precision-tuned hardware-software tango is the new show in tech town, expected to reach not only our smartphones and tablets but also the big servers and the cloud. The sky is the limit!