Deep Tech Trends: A new era of computer design

From Nature to Cities, from Humans to… last but not least: Machines! Our overview of the Hello Tomorrow Deep Tech Trends 2020 report is almost done. And it’s obviously no coincidence that these trends mirror the Global Summit program itself!

After reading all of them, you’ll be the most up-to-date attendee in the game. (Bonus points if you went ahead and bought the whole trends report!) There are only two things left to do now: read this last one, and download your Swapcard app to start organising meetings with other attendees!

Based on the more than 5,000 applications we received worldwide to our Global Startup Challenge this year, and drawing on several years of watching, shaping and fostering the Deep Tech ecosystem, we are in a unique position to preview how the future will be shaped. The examples below are all carefully selected Deep Tech Pioneers from the 2019–2020 Global Challenge.

Rare are the objects that have become as ubiquitous as the computer. Today, our computers are largely based on a single, universal design despite being used to solve a variety of very different problems. You use the same computer to look up your next holiday, stream the latest episode of your favourite series or work through some tough data in Excel. This is all about to change. Instead, we will build computers tailored to particular classes of applications, and, depending on the requirements, they will likely differ in their fundamental architecture.

Computers will become increasingly task-optimised, allowing for better, more specialised designs

The most prominent example, dominating headlines in both political and scientific communities, is quantum computing. Discussed in more detail in the deep dive below, it leverages aspects of quantum physics to compute very complex problems far more efficiently than today’s computers can. Quantum physics is not the only option, however: other, non-quantum physical properties can also be leveraged to build more task-optimised, specialised computers.

But first, a simple question: why do we need to develop radically new computing solutions? Beyond the quest for more powerful computing, the widespread adoption of machine learning and IoT solutions, each with its own computing requirements, drives the creation of diverging computing platforms. Furthermore, the need to reduce computing-related energy consumption, particularly in data centres, incentivises the development of energy-efficient solutions.

New processors based on the current design no longer deliver meaningful gains in overall performance, mostly because ever-shrinking silicon transistors are approaching their fundamental limits. Moore’s Law may indeed be coming to an end. Instead of incremental improvements, we need to take a step back. This is what Aligned Carbon is doing, addressing the need for improved performance by integrating carbon nanotubes into processing chips. Monolithic 3D nano-electronics such as these are expected to yield unparalleled semiconductor performance. A similar performance leap is expected from the introduction of Magnetic Random Access Memory (MRAM), an ultrafast, high-density storage medium. The main limitation in scaling it for wider use is the presence of structural impurities in magnetic materials, which reduce the storage capacity.

Source: Spin-Ion Technology

Spin-Ion Technologies addresses these structural defects by using light-ion beam irradiation to treat the magnetic materials. These solutions will at first only be used in supercomputer setups, but over time they will likely become an integral part of many more computing platforms, as they can be implemented everywhere. In contrast, the following solutions address the shortcomings of very particular applications, and so are less likely to be universally incorporated.

Slow communication between memory and processing chips is one of the current system-level roadblocks for data-intensive applications. To leap beyond memory bandwidth limits, and particularly to enable applications such as machine learning and new realities (AR/VR), both MemComputing and UpMEM propose an entirely new circuit design. By integrating processing units directly into memory, they remove the previously limiting bottleneck of moving data back and forth between processor and memory. Leveraging computational memory shortens the time required to solve a problem, while also decreasing the amount of storage and energy used, offering a new design tailored to AI applications.
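To make the benefit concrete, here is a minimal back-of-the-envelope sketch of that data-movement argument. The bandwidth and throughput figures are assumptions chosen purely for illustration, not measurements of MemComputing’s or UpMEM’s hardware.

```python
# Toy cost model (illustrative numbers only) contrasting a conventional
# architecture, where data must cross a memory bus before being processed,
# with a processing-in-memory design, where the bus transfer largely disappears.

DATA_BYTES = 8 * 10**9          # 8 GB working set (assumed)
BUS_BANDWIDTH = 25 * 10**9      # 25 GB/s memory bus (assumed)
COMPUTE_RATE = 200 * 10**9      # 200 G ops/s, one op per byte (assumed)

def conventional_time(data_bytes):
    """Data is streamed over the bus, then processed by the CPU."""
    transfer = data_bytes / BUS_BANDWIDTH
    compute = data_bytes / COMPUTE_RATE
    return transfer + compute

def in_memory_time(data_bytes):
    """Processing happens next to the data, so the bus transfer is avoided."""
    return data_bytes / COMPUTE_RATE

if __name__ == "__main__":
    t_conv = conventional_time(DATA_BYTES)
    t_pim = in_memory_time(DATA_BYTES)
    print(f"conventional: {t_conv:.2f} s  in-memory: {t_pim:.2f} s "
          f"(~{t_conv / t_pim:.1f}x faster)")
```

Under these assumed numbers the bus transfer dominates the runtime, which is exactly the overhead that in-memory computing is designed to remove.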

In an era of explosive data growth, it is crucial, from both an economic and an environmental point of view, for data centres to decrease their energy consumption. And we might be able to find the answer in light. Light allows us to compute at higher speeds whilst using a lot less power, thereby providing higher performance per unit of power. Since optoelectronic processors need significantly more physical space, they are not in widespread use. For data centres, however, saving energy matters more than saving space, which could allow optoelectronic processors to become the dominant hardware platform for this niche application.

LightSpeedAI Labs’ optoelectronic processors offer computation at the speed of light. A limiting factor of optoelectronic chips is their inability to host enough high-bandwidth lasers per chip in order to achieve high performance at low power. This is where Iris Light Technologies comes in, developing a solution to print hundreds of lasers directly onto chips using a nanomaterial “ink”, thus dramatically increasing data bandwidth capacity per chip.

Another computing paradigm with specific functions and requirements is edge computing, the underlying concept of which is to bring computation and data storage physically closer to the data source, i.e. the devices where the data is being gathered. This reduces latency and increases autonomy at the edge, and it will become increasingly important with the introduction of more sophisticated edge devices, such as collaborative robots or autonomous drones, which need to process information immediately in order to operate safely. Sending information to the cloud to be analysed and then returned to the edge device would take too much time to guarantee an adequate and sufficiently fast reaction. Accordingly, the need for more computing and data storage resources rises, while the physical space and energy supply available to accommodate those resources is limited by the size of the edge device.
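A rough latency budget makes the point. The figures below are assumptions for illustration only, not measurements of any particular network, cloud service or edge chip.

```python
# Back-of-the-envelope latency budget (assumed, illustrative numbers) showing
# why a safety-critical edge device, e.g. an autonomous drone, cannot wait for
# a cloud round trip before reacting.

CLOUD_RTT_MS = 60.0          # network round trip to a remote data centre (assumed)
CLOUD_INFERENCE_MS = 15.0    # model inference on a cloud server (assumed)
EDGE_INFERENCE_MS = 25.0     # same model on a slower, low-power edge chip (assumed)
REACTION_DEADLINE_MS = 50.0  # time budget to avoid an obstacle (assumed)

cloud_total = CLOUD_RTT_MS + CLOUD_INFERENCE_MS
edge_total = EDGE_INFERENCE_MS

for name, total in [("cloud", cloud_total), ("edge", edge_total)]:
    verdict = "meets" if total <= REACTION_DEADLINE_MS else "misses"
    print(f"{name}: {total:.0f} ms -> {verdict} the "
          f"{REACTION_DEADLINE_MS:.0f} ms deadline")
```

Even with a slower processor, the edge device wins here because it never pays the network round trip, which is the core argument for edge computing in time-critical settings.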

For time-sensitive applications in particular, but also for workloads involving large data sets or data whose privacy and integrity must be preserved, there is a growing need to make decisions without sending data to the cloud. Synthara is addressing that challenge by developing a special processing chip that enables AI-based applications to run at very low energy consumption, to accommodate the limited performance of the batteries powering current IoT devices.

Deep Dive: The quest for practical quantum computing

The advent of quantum computing is looming. The potential of this technology is often described as ‘unlimited’, even though most current applications still look more like tests of the machines themselves than solutions to real problems. Nevertheless, quantum computing is expected to make AI algorithms more powerful, and to make batteries, industrial catalysts and medicines more efficient, the latter thanks to better simulations of material behaviour and reactivity at the molecular level.

Credit: IBM Research Flickr

Quantum computers take advantage of two known physical phenomena, superposition and entanglement, to efficiently compute difficult problems. In contrast to the bits in today’s computers, which can be either 1 or 0, the so-called qubits in quantum computers can represent various combinations of 1 and 0 at the same time. This admittedly counterintuitive feature is called superposition and originates from quantum theory. It allows quantum computers to explore a vast number of potential solutions much more quickly. The second phenomenon, quantum entanglement, describes the fact that the state of one qubit influences the state of another in a predictable way, even if they are separated by very long distances. Consequently, adding qubits increases the computing power of quantum machines exponentially, whereas doubling the bits in today’s computers only doubles the processing power. These physical phenomena combined are the basis of the massive computing power of quantum computers.
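As a minimal mathematical sketch of why the resources scale this way (standard textbook notation, not specific to any vendor’s hardware), the state of an n-qubit register is a superposition over all 2^n classical bit strings, so each additional qubit doubles the number of amplitudes needed to describe the machine:

```latex
% General state of an n-qubit register: a weighted superposition over all
% 2^n classical bit strings x, with complex amplitudes c_x.
\[
  \lvert \psi \rangle \;=\; \sum_{x=0}^{2^{n}-1} c_x \,\lvert x \rangle ,
  \qquad \sum_{x=0}^{2^{n}-1} \lvert c_x \rvert^{2} = 1 .
\]
% A classical n-bit register, by contrast, holds exactly one of those 2^n
% values at any given time.
```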

However, to control these phenomena, the quantum computer has to be cooled down to cryogenic temperatures, and any vibration must be avoided so that the states of superposition and entanglement are not disturbed. So quantum computers probably won’t replace desktop computers any time soon! At a more fundamental level, quantum computing faces two main challenges: increasing the time qubits remain coherent, and correcting errors. Hence, to make quantum computing more practical and robust, the newest generation of solutions needs to overcome these issues. Rather than aiming for longer coherence times, IQM’s superconducting quantum processors are designed to significantly increase the clock speed of the system, allowing more computational operations to be performed per unit of time. Because quantum information is inherently fragile and is lost over time, increasing the speed of operation makes more applications viable within the available coherence window. IQM reportedly achieves this faster clock speed by initialising qubits on a nanosecond timescale, compared to the several microseconds required by alternative protocols, and by using a novel multi-channel readout that determines the qubit state faster and more accurately than the conventional dispersive readout scheme.
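To see why faster initialisation matters, here is a rough throughput sketch. The timings are assumptions for illustration only, not IQM’s published specifications.

```python
# Rough throughput arithmetic (illustrative numbers only): the effective
# "clock speed" of a quantum processor depends not just on gate times but on
# how fast qubits can be reset, since every circuit execution pays the reset
# cost again before the next run.

CIRCUIT_TIME_US = 2.0   # assumed time to run one short circuit, microseconds

def runs_per_second(reset_time_us):
    """Circuit executions per second when each run requires a qubit reset."""
    return 1e6 / (CIRCUIT_TIME_US + reset_time_us)

# Compare a conventional microsecond-scale reset with a nanosecond-scale one.
for label, reset_us in [("microsecond-scale reset", 5.0),
                        ("nanosecond-scale reset", 0.01)]:
    print(f"{label}: ~{runs_per_second(reset_us):,.0f} runs/s")
```

Under these assumed timings, cutting the reset from microseconds to nanoseconds multiplies the number of circuit executions per second several times over, which is the kind of gain faster initialisation and readout aim to deliver.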

To conclude, quantum computing demonstrations will become more compelling and meaningful over the next few years. These first results will contribute to a more nuanced debate about what exactly the superior computing power will enable society to achieve, and whether it will usher us into an era of quantum computer-driven discovery.
