Google is reportedly using Claude to improve its Gemini AI

This move, if accurate, raises questions about ethical practices and compliance in the AI space.

by Kelechi Edeh
Photo by Solen Feyissa / Unsplash

The competition in AI development has always been intense, but Google’s latest efforts to refine its Gemini AI seem to be taking the rivalry to a new level. Recent reports reveal that contractors evaluating Gemini’s outputs have been comparing them against Anthropic’s Claude—a model renowned for its focus on safety and precision.

These evaluations, which assess parameters like truthfulness, verbosity, and safety, have highlighted key differences between the two models, according to TechCrunch. Claude, for instance, often avoids responding to unsafe queries, whereas Gemini has faced criticism for outputs flagged for safety violations, including explicit content.

Adding to the intrigue, some contractors also reported encountering outputs from Gemini that explicitly referenced Claude, with one stating, “I am Claude, created by Anthropic.”


While benchmarking against competitors is standard in AI research, this practice has sparked ethical questions. Anthropic’s terms of service prohibit using its models for competitive development without explicit approval. Google, a major investor in Anthropic with a $2 billion stake, insists that these comparisons are part of standard industry practices and do not violate any agreements. However, the lack of clarity about whether explicit permission was granted has fueled ongoing debate.

Further complicating matters, contractors have raised concerns about being asked to evaluate prompts outside their areas of expertise, such as sensitive healthcare topics. This has cast doubt on the reliability of Gemini’s outputs in critical domains, adding another layer of scrutiny to Google’s approach.

As Gemini evolves, with its Gemini 2.0 Flash model still in an experimental stage, Google’s efforts to close the gap with competitors like Claude and ChatGPT reflect the high stakes of the industry. Balancing technical advancement with ethical practices and compliance will likely be key to its long-term success.

Whether this approach sets a precedent or triggers regulatory scrutiny, it underscores the fine line between innovation and controversy in the race for AI dominance.
