Title: Exploiting Asymmetry in Computing
Details: Tue, 19 Apr, 2016 3:30 PM @ BSB 361
Abstract: Researchers have demonstrated significant energy gains by reducing the supply voltage to electronic devices, the downside being that some operations can become erroneous. Building on these works, we present a perspective on computing that explores the trade-off between the amount of effort (typically energy) one invests in a computation and the quality of the result. The essential idea is to recognize that some pieces of information processed by an algorithm may be more important than others. Moreover, this asymmetry in the importance of information can be exploited, because many applications are resilient to small amounts of error and can tolerate error in relatively unimportant parts of the data. We will first discuss several problems for which we can significantly improve the energy consumption while ensuring that the error is bounded within acceptable limits.
Our main contribution is a computational model that classifies problems based on their potential for reducing the energy needed to solve them within acceptable error limits. For a large class of problems, there is significant asymmetry in the level of influence that each bit exerts on the value of the output. Consider, for example, adding two m-bit numbers: the value of the most significant bit has far more influence on the final outcome than the value of the least significant bit. We show that for these problems, a more careful assignment of energy levels to each input bit position can significantly improve the quality-vs-energy trade-off. On the other hand, there are problems (such as computing the parity of m bits) where every input bit has equal influence on the output. For these problems, no assignment of energy levels can significantly reduce the error compared with investing energy equally across the bits.
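The contrast between these two regimes can be sketched with a small, hypothetical simulation (the function names and the word width M below are illustrative choices, not from the talk): flipping bit i of an addend perturbs the sum by exactly 2^i, so the damage depends strongly on the bit position, whereas flipping any single input bit always flips the parity.

```python
# Illustrative sketch: how much does a single-bit error at position `bit`
# perturb the output of addition versus parity? (M is an assumed word width.)

M = 8  # hypothetical word width for this illustration

def addition_error(a, b, bit):
    """Absolute error in a + b when bit `bit` of a is flipped."""
    a_faulty = a ^ (1 << bit)
    return abs((a_faulty + b) - (a + b))

def parity(x):
    """Parity (XOR of all bits) of x."""
    return bin(x).count("1") % 2

def parity_flipped(x, bit):
    """True if flipping bit `bit` of x changes its parity (it always does)."""
    return parity(x ^ (1 << bit)) != parity(x)

# For addition, the damage grows exponentially with the bit position...
for i in range(M):
    assert addition_error(0b10101010, 0b01010101, i) == 1 << i

# ...while for parity, an error in any bit position is equally harmful.
for i in range(M):
    assert parity_flipped(0b10101010, i)
```

Under this model, spending more energy (i.e., a lower error probability) on the high-order bits of an adder bounds the expected output error cheaply, while for parity no such uneven allocation helps.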