SAN FRANCISCO, Oct. 24, 2024 — Quantum AI software company Multiverse Computing announced its expansion to the U.S. with the opening of an office in San Francisco.
The company said its entry into the U.S. will accelerate adoption of its AI and quantum-based solutions by U.S.-based customers, including government agencies and businesses.
Headquartered in Spain, Multiverse also has offices in Canada, France, Germany, and the UK.
The San Francisco branch of Multiverse Computing will be led by Chris Zaharias, who was recently named Vice President of Sales. Zaharias has led sales at numerous venture-funded software startups, including Omniture; Efficient Frontier, the first firm to apply algorithms to the optimization of paid search advertising, later acquired by Adobe; his own SaaS firm SearchQuant, which served 400 startup customers; and the 5,000-person AI data annotation firm iMerit.
“We recognized we needed a strong leader to continue spearheading projects with quantum and AI-based solutions in the all-important North American market and Chris fit the bill,” said Enrique Lizaso Olmos, CEO of Multiverse Computing. “We are excited to open our office in San Francisco, and now have the experience and the skills to achieve our goals in the U.S.”
“Multiverse Computing is a groundbreaking organization, and I’m thrilled to lead its expansion into the U.S. market,” said Zaharias. “Being in San Francisco, we have access to some of the best tech talent in the world, who can help take our company to new heights.”
Recently, Multiverse Computing was selected to participate in the 2024 AWS Generative AI Accelerator, a competitive program that chose 80 startups from thousands of applicants to receive AWS credits, mentorship, and other resources to advance their AI and ML research and expand their businesses. The second accelerator class will have the chance to showcase their work to an audience of industry professionals and AWS leaders at re:Invent 2024 in Las Vegas in December.
With the help of this program, Multiverse Computing plans to expand the capabilities of CompactifAI, a software that uses tensor networks to optimize large AI models by creating smaller, more efficient versions.
CompactifAI reduces the significant energy demands of training and running large language models (LLMs) like ChatGPT and Bard. The software can also reduce development costs and make it easier to integrate these models into more digital services.
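CompactifAI's actual method is proprietary, but the general idea behind tensor-network-style model compression can be sketched with a simpler, related technique: factorizing a large weight matrix into low-rank pieces so the model stores and multiplies far fewer parameters. The example below is an illustrative assumption, not Multiverse's implementation, and uses a truncated SVD as the stand-in for a tensor-network decomposition.

```python
import numpy as np

# Illustrative sketch only: CompactifAI's real algorithm is not public.
# The idea shown here: replace a large weight matrix with a low-rank
# factorization, shrinking parameter count while approximating the original.

rng = np.random.default_rng(0)

# A "weight matrix" with hidden low-rank structure, as trained layers often have.
W = rng.standard_normal((512, 32)) @ rng.standard_normal((32, 512))

# Truncated SVD keeps only the top-k singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 32
W_compressed = (U[:, :k] * s[:k]) @ Vt[:k, :]

original_params = W.size                              # 512 * 512
compressed_params = U[:, :k].size + k + Vt[:k, :].size  # two thin factors + k values

print(f"params: {original_params} -> {compressed_params}")
print(f"reconstruction error: {np.linalg.norm(W - W_compressed):.2e}")
```

Because the example matrix is exactly rank 32, the factorization cuts the parameter count by roughly 8x with negligible reconstruction error; real model layers are only approximately low-rank, so compression trades size against accuracy.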