As humans – and our billions of devices – become increasingly dependent on digital systems, computers are running out of processing power to keep up.
Fortunately, writes Darshika Perera, assistant professor of electrical and computer engineering, there is a solution: customized and reconfigurable architectures that can bring next-generation edge-computing platforms up to speed.
Perera published the research in a feature story for the spring edition of the Institute of Electrical and Electronics Engineers Canadian Review. In it, she describes a key concern for those of us living in the Internet of Things era: as more systems process massive amounts of data “in the cloud,” the cloud is facing serious obstacles to keeping up. Poor response times, high power consumption and security issues are just a few of the challenges it faces when transmitting, processing and analyzing the enormous burden of global data.
Perera’s research, conducted with a team of ten graduate students in the Department of Electrical and Computer Engineering, highlights a twin path forward.
First, Perera writes, data processing must move away from its reliance on traditional cloud infrastructures and towards a complementary solution: edge computing.
Non-computer scientists can think of edge computing like a popular pizza restaurant opening new locations across town. A pizza that travels 20 miles from the restaurant will be cold by the time it reaches the customer. But a pizza cooked right down the street will arrive faster, and reduce the strain on the original restaurant’s kitchen.
Similarly, edge computing processes data nearer to its source – on phones, smart watches and personal computers – rather than farming the job out to the cloud. Perera writes that edge computing addresses nearly all of the challenges faced by cloud computing, from speed and bandwidth to security and privacy.
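The trade-off behind the pizza analogy can be sketched with a bit of back-of-the-envelope arithmetic. The numbers below are purely hypothetical, chosen only to show the shape of the comparison: the cloud may have more compute, but shipping data there and back dominates the total time.

```python
# Rough, illustrative comparison of cloud vs. edge round-trip time.
# All numbers are hypothetical, chosen only to show the trade-off's shape.

def round_trip_ms(network_latency_ms: float, payload_mb: float,
                  bandwidth_mbps: float, compute_ms: float) -> float:
    """Time to ship a payload to a processor, compute, and return a result."""
    transfer_ms = (payload_mb * 8 / bandwidth_mbps) * 1000  # upload time
    return 2 * network_latency_ms + transfer_ms + compute_ms

# Distant data center: fast compute, but far away and bandwidth-limited.
cloud = round_trip_ms(network_latency_ms=50, payload_mb=5,
                      bandwidth_mbps=100, compute_ms=10)

# Nearby edge device: slower compute, but almost no travel or transfer cost.
edge = round_trip_ms(network_latency_ms=1, payload_mb=5,
                     bandwidth_mbps=1000, compute_ms=30)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Even with three times the compute cost at the edge, the shorter trip wins by a wide margin in this toy example.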
But edge computing is still in its infancy, and most of the edge-computing solutions currently envisioned rely on processor-based, software-only designs.
“At the edge, the processing power needed to analyze and process such enormous amount[s] of data will soon exceed the growth rate of Moore’s law,” Perera writes. “As a result, edge-computing frameworks and solutions, which currently solely consist of CPUs, will be inadequate to meet the required processing power.”
That’s where Perera’s second conclusion comes into play. To process and analyze ever-increasing amounts of data – and to handle associated problems along the way – the next generation of edge computing platforms needs to incorporate customized, reconfigurable architectures optimized for specific applications.
What does this mean? It means that computer processors will no longer perform one dedicated job. Instead, like shapeshifters, they will configure themselves to perform any computable task set before them.
The flexibility of these systems puts them head and shoulders above general-purpose processors. Perera writes that reconfigurable computing systems, such as field-programmable gate arrays (FPGAs), are more flexible, durable, upgradable and compact, less expensive, and faster to bring to market – all of which helps to support real-time data analysis.
As Perera envisions the future of edge computing, her analysis shows that multiple applications and tasks can be executed on a single FPGA by dynamically reconfiguring the hardware on chip from one application or task to another as needed.
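The idea of time-sharing one chip across many tasks can be illustrated with a small software analogy (this is not real FPGA tooling, which uses vendor-specific bitstreams; the class and configuration names here are invented for illustration). One "fabric" is re-loaded with different configurations as the workload changes, and back-to-back jobs of the same kind reuse the configuration that is already loaded:

```python
# Software analogy for dynamic reconfiguration (not real FPGA tooling):
# one reconfigurable "fabric" is re-loaded with different "configurations"
# as the workload changes, instead of dedicating one chip per task.

class ReconfigurableFabric:
    def __init__(self):
        self.loaded = None          # name of the currently loaded configuration
        self.circuit = None         # stand-in for the programmed logic
        self.reconfigurations = 0   # how many times we swapped configurations

    def load(self, name, circuit):
        """Swap in a new configuration (akin to loading a bitstream)."""
        if self.loaded != name:     # skip the swap if it's already loaded
            self.loaded = name
            self.circuit = circuit
            self.reconfigurations += 1

    def run(self, data):
        return self.circuit(data)

# Hypothetical "configurations": each stands in for a fixed-function pipeline.
configs = {
    "filter": lambda xs: [x for x in xs if x > 0],
    "scale":  lambda xs: [2 * x for x in xs],
}

fabric = ReconfigurableFabric()
jobs = [("filter", [-1, 2, 3]), ("scale", [2, 3]), ("scale", [4])]
for name, data in jobs:
    fabric.load(name, configs[name])
    print(name, fabric.run(data))

# Three jobs, but only two reconfigurations: the consecutive "scale"
# jobs reuse the configuration that is already on the fabric.
print("reconfigurations:", fabric.reconfigurations)  # → 2
```

The design choice mirrored here is the one Perera describes: rather than one dedicated circuit per application, the hardware itself is treated as a schedulable resource.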
These kinds of improvements over traditional computing processes, Perera writes, “will make next-generation edge-computing platforms smart and autonomous enough to seamlessly and independently process and analyze data in real time, with minimal or no human intervention.” They could allow technologies that rely on lightning-fast edge computing, like self-driving cars, to become ubiquitous – and they could enable future technologies that are unimaginable today.
One thing is for certain: they will make computing faster, more autonomous, and more adaptive than ever before.
Read Perera’s full article for IEEE Canadian Review online.
Darshika Perera is an assistant professor of electrical and computer engineering in the College of Engineering and Applied Science at the University of Colorado Colorado Springs. She has extensive experience in embedded systems, digital systems, data analytics and mining, hardware acceleration and dynamic reconfiguration, and machine learning techniques. Her research is conducted with a team of graduate students in the Department of Electrical and Computer Engineering at UCCS. Learn more online.