Former Google consultant says Gemini is what happens when AI companies go 'too big too soon'

Joe Toscano said AI 'algorithmic audits' should be conducted on AI companies to promote transparency

A former Google consultant said the backlash to the company's Gemini artificial intelligence (AI) resulted from going "too big too soon" and floated several ideas for how Big Tech can offer transparency to the public.

Speaking with Fox News Digital, DataGrade CEO Joe Toscano said Google was caught without a comparable product when groundbreaking tools like ChatGPT and Midjourney came to market and has been racing to catch up with the competition ever since.

"Unfortunately, in corporate catch-up, sometimes you run into this space where you launch too early. And in the case of AI, it's like, oh, well, now we just saw behind the cloak, right? If it's not curated well enough or practiced well enough, you can very quickly see that what we believe to be magic is simply a lot of trouble," Toscano said.

In the modern machine learning environment, Toscano said tech companies expect the machine to learn on its own and the outcomes to be more or less "bulletproof."

GOOGLE RELEASES NEW GEMINI UPDATE TO GIVE USERS ‘MORE CONTROL’ OVER AI CHATBOT RESPONSES


DataGrade CEO and former Google consultant Joe Toscano said Google should have held off on releasing Gemini until it was ready. (Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images)

He also stressed that creating an intelligent system specializing in one specific field is already extraordinarily difficult. As such, it is an "incredible feat" for Google to try to build an AI that can answer nearly any question in a variety of languages.

However, Toscano believes the product was rushed out of the gate and that its flaws stemmed from improper training rather than malicious intent.

Google has apologized after Gemini gave eyebrow-raising responses and produced historically inaccurate images. The image generation feature of the product has since been paused and will be re-released in the coming weeks after fixes are implemented.

While ChatGPT was quietly being built over many years, Google invested in other avenues of machine learning rather than focusing solely on conversational AI. Toscano suggested a Gemini-like product was likely part of Google's AI roadmap, but the explosion of generative AI products, and the value shareholders saw in them, forced the company's hand.

"It was probably just not the best strategic move for Google, who should have sat back and said, we're working on it. We'll release it when it's ready. But that's not really the motto of Silicon Valley either. Never really has been. So maybe that'll change now that we saw this faux pas and some big struggles," Toscano said.

The DataGrade CEO said it is up to companies to validate the value of their business and its outputs. However, he suggested Silicon Valley implement several policies that would both insulate companies from harm and bring some transparency to the public.

GOOGLE GEMINI USING ‘INVISIBLE’ COMMANDS TO DEFINE ‘TOXICITY’ AND SHAPE THE ONLINE WORLD: DIGITAL EXPERT


The Google AI logo displayed on a smartphone with Gemini in the background in this photo illustration, taken in Brussels, Belgium, on February 8, 2024. (Jonathan Raa/NurPhoto via Getty Images)

In 2023, most AI-related laws were part of larger comprehensive state consumer privacy laws, focusing on user data. While several federal agencies have begun issuing guidelines on AI, there is currently no federal legislation on the topic.

Toscano said companies need to document, at minimum, the decisions used in their AI processes, the people involved, and the training data fed into the system.

"It's not going to be perfect. And with the machine learning systems that we have today, it's almost impossible to actually untangle everything and fully understand how it happened. But we have to have something to base on and to hold responsible," he said.

Toscano also expects to see a growing industry of "algorithmic audits": reviews of tech companies' processes akin to the examination of financial statements in other industries.

He said these audits must be handled by "specialized knowledge professionals" independent of the organizations they review. Controls should also be in place to avoid regulatory capture, a problem that has long dogged the financial industry.

Toscano said a company that does not get the rating it wants should not be able to shop around and pay the next firm for a better score.

GOOGLE CO-FOUNDER SAYS COMPANY ‘DEFINITELY MESSED UP’ ON GEMINI'S IMAGE GENERATION


The Google logo and the words "AI Artificial Intelligence" are seen in this illustration taken May 4, 2023. (REUTERS/Dado Ruvic/Illustration/File Photo)

"A regular intermittent audit system can be similar to, let's say, a drug check in pro sports that can happen at any time and has to be complied with to ensure that what we're looking at is safe and that all the controls are in place to avoid reckless negligence," Toscano said.

He also expressed concern with how companies and governments archive information.

If politicians leverage AI to push narratives onto the public while websites keep disappearing after a set period, Toscano believes humanity will enter a future in which physical materials, such as paper, grow in value as the most concrete way to separate truth from fiction in the historical record.

In some of his ventures, Toscano and his colleagues create logs and estimates digitally but also work on producing paper backups, so that if the internet goes down, they can keep working without a severe hit to their business.

Last week, Meta suffered a massive outage after users worldwide reported that they could not access Facebook, Messenger and Instagram. The outage lasted almost two hours. While it is unclear how much revenue the incident cost Meta, the company lost $100 million during a 7-hour outage in 2021.


Toscano said the worry is not merely that a world leader could decide to remove information. Information can also become irretrievable at the whim of whoever runs a website, or through back-channel services on the web that help people wipe out information about themselves.

According to Toscano, the implications of AI and digital information range from the macro level that drives democracy to the micro level of everyday conversation, and they will only grow more troubling as countries wage cyberwar and attempt to shut down each other's infrastructure.

"Deleting archives and controlling information is now the modern war," Toscano added. "We see lots of bombs right now over in Israel, over Ukraine, all of that. But what we don't see… is this invisible hand that is controlling the information ecosystem that now controls the narrative of our societies."