Oriole Networks, a UK-based startup, has secured $22 million in Series A funding to further develop its innovative AI training technology. The company uses photonics (light-based systems) to connect AI chips, creating a “super-brain” that trains large language models (LLMs) 100 times faster while using a fraction of the energy consumed by traditional methods.
AI computing power is doubling every 100 days, and current AI systems are extremely energy-intensive. For example, a single ChatGPT query consumes 25 times more energy than a Google search, with implications for global energy grids and carbon emissions.
If successful, Oriole’s solution could set a new industry standard for energy-efficient AI training and reshape how large AI models are developed, helping AI labs and data centers meet the growing demand for computational power without exacerbating environmental challenges.
Google recently ceased claiming carbon neutrality, while Microsoft acknowledges that its AI development could challenge its own sustainability targets. Both companies are working towards renewable energy solutions, but their AI expansion risks increasing their overall carbon footprints.
In today’s briefing we explore the challenges around AI energy demands and the way forward for a more efficient, planet-friendly future for AI.
What happened
Oriole Networks, a spinout from University College London (UCL), was established in 2023 by three founders: Professor George Zervas, Alessandro Ottino, and Joshua Benjamin.
Their aim was to address the growing energy consumption challenges associated with artificial intelligence (AI) and large language models (LLMs).
The founders leveraged over two decades of research at UCL in photonics—a field that utilizes light for various applications—to create innovative solutions that significantly reduce the power required for AI computations.
In March 2024, Oriole Networks raised £10 million (approximately $12.5 million) in seed funding.
This initial round of financing was co-led by several investors, including the UCL Technology Fund, Clean Growth Fund, XTX Ventures, and Dorilton Ventures.
The funds were aimed at advancing their technology development and initial market entry.
In October 2024, Oriole Networks secured $22 million in Series A funding, bringing the total funding raised to date to $35 million. This funding round was led by Plural, a venture capital firm that has also supported other innovative tech companies like VSParticle and Teton.ai.
Other existing investors participated in this round, including the UCL Technology Fund, XTX Ventures, Clean Growth Fund, and Dorilton Ventures.
“Our company has experienced rapid growth, and there is an urgent need for our energy-efficient AI solutions. Oriole’s technology has the potential to reshape the AI industry by solving bottlenecks in energy consumption and competition at the hardware layer,” said James Regan, CEO of Oriole Networks.
Oriole Networks is expected to use the Series A funds to accelerate its development efforts, engage with early adopters, and solidify partnerships with high-volume suppliers to ensure a stable production pipeline.
The company’s photonic-based AI network is likely to be tested by select customers within the next year.
Ian Hogarth, partner at Plural and lead investor in the Series A round, stresses the promise of Oriole’s photonic technology, which is grounded in extensive research.
He believes this technology could lead to major advancements in AI infrastructure, primarily by enhancing energy efficiency and decreasing delays in data processing within data centers.
By 2025, Oriole aims to have its technology commercially available, which could disrupt the AI sector by offering more sustainable, scalable, and efficient solutions for LLM training.
This could potentially spark interest from larger tech players or become a target for acquisition as AI energy consumption concerns grow.
Rhodium Group projects that if data center demand triples by 2035, power sector emissions could rise by more than 56%.
What you need to know
Oriole’s “super-brain” concept hinges on using light to connect thousands of AI chips together, creating a network that can train advanced LLMs faster and more efficiently than current methods.
Typically, the training of LLMs like ChatGPT and GPT-4 requires immense computational power and energy. Stanford University research has found that running a single ChatGPT query consumes more than 25 times the energy of a typical Google search.
These energy demands are skyrocketing as AI models grow more complex, with computing power for AI expected to increase a million-fold over the next five years.
Sajjad Moazeni, a computer engineering researcher at the University of Washington, emphasizes the significant difference in computational demands between generative AI systems and traditional online services. Generative AI requires much more data processing, estimated to be 100 to 1,000 times more resource-intensive.
“In the back end, these algorithms that need to be running for any generative AI model are fundamentally very, very different from the traditional kind of Google Search or email. For basic services, those were very light in terms of the amount of data that needed to go back and forth between the processors,” notes Moazeni.
The AI lifecycle impacts the environment most significantly in two stages: training and inference. While training accounts for around 20% of the energy used, inference — where AI models are applied to real-world tasks — dominates with 80%.
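The 20/80 split described above can be turned into a rough back-of-the-envelope estimate. The figures below are hypothetical placeholders used only to illustrate the arithmetic, not measurements of any real model.

```python
# Toy illustration of the lifecycle split cited above: roughly 20% of
# an AI model's lifetime energy goes to training, 80% to inference.
# The input figure is a hypothetical placeholder, not a real measurement.

TRAINING_SHARE = 0.20
INFERENCE_SHARE = 0.80

def lifecycle_energy(training_mwh: float) -> dict:
    """Given a model's training energy, estimate total lifecycle energy
    under the 20/80 training/inference split."""
    total = training_mwh / TRAINING_SHARE
    return {
        "training_mwh": training_mwh,
        "inference_mwh": total * INFERENCE_SHARE,
        "total_mwh": total,
    }

# Example: a hypothetical model whose training consumed 1,000 MWh.
est = lifecycle_energy(1000.0)
print(est)  # inference dominates: 4,000 MWh vs. 1,000 MWh for training
```

The point the arithmetic makes concrete: for every unit of energy spent training, roughly four more are spent serving the model, so efficiency gains at inference time compound across a model's whole deployed life.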
As the demand for AI applications continues to surge across sectors, this environmental footprint is projected to increase dramatically. Addressing this now is imperative if we are to ensure the sustainability of AI technologies.
TALK POINTS
These points are crafted to spark engaging discussions with your peers, helping you stay ahead of industry shifts, explore fresh perspectives, and drive informed decision-making in your professional circles. Whether you’re networking, meeting with partners, or leading your team, Talkpoints equips you with timely insights to elevate the conversation.
1. The future of AI networking
Photonic technologies harness the power of light to enhance data processing and transmission, offering a new approach that is both faster and more energy-efficient compared to traditional electronic methods.
In photonic systems, information is transmitted using light instead of electrical signals. Light can travel incredibly fast—around 186,282 miles per second (300,000 kilometers per second)—and can carry much more information simultaneously than electrical signals.
For example, when data is sent through an optical fiber (the same technology used in high-speed internet), it’s transmitted as pulses of light.
These pulses can be modulated (turned on and off quickly) to represent different bits of information, enabling faster communication over long distances with minimal signal loss.
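The simplest version of this modulation scheme is on-off keying: a pulse of light encodes a 1, its absence encodes a 0. The sketch below is a toy model of that idea only; real photonic links use far more sophisticated modulation formats and hardware.

```python
# Toy model of on-off keying (OOK): a light pulse "on" encodes a 1,
# "off" encodes a 0. This illustrates the on/off modulation described
# in the text; it is not a model of any real optical transceiver.

def modulate(bits):
    """Map bits to transmitted light intensities (1.0 = pulse, 0.0 = dark)."""
    return [1.0 if b else 0.0 for b in bits]

def demodulate(intensities, threshold=0.5):
    """A photodetector recovers bits by thresholding received intensity,
    tolerating some attenuation along the fiber."""
    return [1 if i > threshold else 0 for i in intensities]

message = [1, 0, 1, 1, 0, 0, 1]
signal = modulate(message)
assert demodulate(signal) == message  # round-trips losslessly
```

Because the threshold detector only asks whether a pulse is present, the scheme survives the modest attenuation a signal picks up over long fiber runs, which is one reason optical links hold up so well over distance.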
Photonic technologies utilize specialized components, such as lasers, waveguides, and detectors, to manipulate light and perform calculations. For instance, lasers generate light, while waveguides direct that light along specific paths to perform operations similar to electronic circuits.
In a photonic neuromorphic computing setup, these components can be arranged to mimic the way the human brain works by creating artificial neurons and synapses that process information using light.
This means that instead of using electrical currents to fire neurons and transmit signals, the system uses light pulses, which can process data faster and more efficiently.
These photonic systems can handle large amounts of data quickly, which is essential for AI applications that require processing vast datasets (like image recognition or language processing).
By integrating photonic technologies into AI hardware, researchers can create machines that not only think faster but also consume less energy, making them more sustainable.
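A heavily simplified sketch of the photonic-neuron idea described above: input light intensities stand in for signals, attenuation factors stand in for synaptic weights, and a detector fires when enough combined optical power arrives. This is a conceptual toy only; Oriole has not published its hardware design, and real photonic neuromorphic systems are far more involved.

```python
# Toy "photonic neuron": inputs are light intensities, weights are
# modeled as attenuation factors in [0, 1], and a photodetector applies
# a power threshold. Purely illustrative; not any vendor's design.

def photonic_neuron(intensities, attenuations, threshold):
    """Sum the attenuated light intensities arriving at a detector and
    fire (return 1) if the combined power clears the threshold."""
    power = sum(i * a for i, a in zip(intensities, attenuations))
    return 1 if power >= threshold else 0

# With these weights and threshold, the neuron behaves like a logical
# AND: it fires only when both inputs are bright.
assert photonic_neuron([1.0, 1.0], [0.6, 0.6], threshold=1.0) == 1
assert photonic_neuron([1.0, 0.0], [0.6, 0.6], threshold=1.0) == 0
```

The appeal of the physical version is that the weighted sum happens "for free" as light propagates and combines, rather than through millions of energy-hungry transistor switches.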
2. Diversifying energy sources is crucial
Data centers that power AI are projected to consume up to 14 gigawatts (GW) of additional power by 2030, putting pressure on electricity grids and energy resources, as highlighted in the Bank of America and McKinsey reports.
According to The New York Times, major tech companies are making significant financial commitments to nuclear energy as they seek reliable, emissions-free power for their rapidly expanding AI operations. Microsoft, Google, and Amazon have all made recent investments aimed at integrating nuclear power into their energy strategies.
Microsoft has struck a deal with Constellation Energy to revive the Three Mile Island nuclear power plant in Pennsylvania. Additionally, Microsoft has committed to purchasing power from Helion Energy, a startup focused on nuclear fusion, with plans to have the world’s first fusion power plant operational by 2028.
Amazon recently spent $650 million to acquire a data center campus in Pennsylvania that will be powered by an existing nuclear plant. In a separate move, Amazon announced its investment in X-Energy, a startup developing small modular reactors (SMRs), a more compact and potentially cost-effective nuclear technology.
Google has partnered with Kairos Power, a startup also working on SMRs, and expects the first reactors to be running by 2030.
The bottom line
The rapid growth and integration of generative AI technologies pose significant challenges regarding energy consumption and environmental impact. Key industry players, such as Oriole Networks, are emphasizing the urgent need for innovative, energy-efficient solutions to address these challenges.
Through advancements in photonic technology and improved network designs, there is potential to reshape AI infrastructure, enhance energy efficiency, and mitigate the rising carbon footprint associated with data centers.
Additionally, diversifying energy sources is crucial to support this technological evolution sustainably, ensuring that the increasing energy demands of AI are met with renewable and less environmentally harmful energy options.