AI History And Future: Artificial Intelligence Series 1/5

“The present moment is the futuristic dreams of yesterday.” - Nicole Serena Silver

As we enter the long-anticipated era of artificial intelligence (AI) and machine learning, there is both fear and excitement about what is to come. Speculation about this moment dates all the way back to the mythological automatons of ancient Greece. Humans and technology have always been intertwined; it is part of what makes us unique among species. We create and use tools to enhance our lives, from fire to the steam engine to AI. However, the pace of technological development is accelerating exponentially, which can produce unprecedented impacts.

The book The Techno-Human Condition highlights the complexity of the butterfly effects that follow when new technologies are introduced. The authors give the example of the train: its sole purpose was to get passengers from point A to point B, yet building a train system required an unexpected and now crucial new system, universal time, which was needed to schedule the trains. The inventors had no idea this secondary impact would follow, and the same is true for any new advance in technology. We can never fully predict the additional influences an innovation will have, and it is rare for entrepreneurs to take the time to examine the potential butterfly effects of their businesses. The best way to anticipate the outcomes of a technological advance is to analyze history, logic, science, and sociological patterns, all of which we will examine in this article as they relate to AI.

AI Origins

In 1956, a group of researchers organized a workshop at Dartmouth College, where they proposed the concept of "artificial intelligence" and set out to explore its possibilities. From there, the field continued to evolve, with researchers exploring different approaches and techniques to create intelligent machines. John McCarthy is widely credited with coining the term "artificial intelligence" and developing the LISP programming language, which was used in many early AI systems. Marvin Minsky built one of the first neural network learning machines (and later, with Seymour Papert, wrote the influential book Perceptrons), while Allen Newell and Herbert Simon developed the Logic Theorist and the General Problem Solver, two of the earliest AI programs.

Some of the earliest use cases of AI include speech recognition and natural language processing. In the 1960s and 1970s, researchers developed programs that could recognize and respond to human speech, paving the way for modern-day digital assistants like Siri and Alexa. Other early applications of AI include game playing, where computers could play games like chess and checkers at a competitive level, and robotics, where machines were developed that could perform tasks like assembly line work and welding.

One of the most significant early applications of AI was in the field of medicine. In the 1970s, researchers began developing programs that could assist with medical diagnosis, analyzing patient data to identify potential health issues. While these early systems were limited in their capabilities, they paved the way for modern medical AI systems, which can analyze vast amounts of data to assist with diagnosis, treatment, and drug development. AI also contributed to the record speed at which COVID-19 vaccines were developed.

There was a pause in AI development in the 1980s and 1990s. Despite initial enthusiasm, progress faced significant challenges, leading to a period known as the AI winter: funding for AI research decreased, and there was general disillusionment with the limitations of the technology. From the 1990s to 2010, AI and machine learning reemerged as researchers began using statistical methods and large datasets to train systems. Notable milestones include the resurgence of neural networks and the development of support vector machines and deep learning techniques.

For the past decade-plus, there have been significant breakthroughs in AI driven by deep learning, a subfield of machine learning that gained prominence thanks to advances in computational power and the availability of large-scale datasets. Deep neural networks achieved breakthroughs in areas such as image recognition, natural language processing, and game-playing AI. This sparked growing interest from entrepreneurs and investors, and many companies invested heavily in AI research and deployment.

AI is now omnipresent in our daily lives. It powers virtual assistants, recommendation systems, smart home devices, and applications across healthcare, finance, transportation, and many other industries. Yet for most people it occupied little mental space until this year, when AI became democratized and widely used through applications such as ChatGPT and image-editing tools. AI moved from behind the curtain into the hands of the public, and we are all experiencing its power. The floodgates of possible applications have opened, and we are awash in potential uses. Both the excitement and the fears around AI are warranted: as with any powerful innovation, the outcomes depend on whether good or bad actors are the ones using the technology. Although it may not feel like it, we are still in the early days of AI.

ChatGPT and Other LLMs (Large Language Models)

Will applications such as ChatGPT change the job market? Yes.

Will it help entrepreneurs? Yes.

Will it change education? Yes.

Will we adapt? Yes.

ChatGPT is a fantastic tool for basic structure and for doing the heavy lifting on mundane tasks, but it cannot yet create sophisticated content. Text-generating applications produce a good structure for first drafts but lack the depth and engagement of well-crafted writing. You also need to fact-check chatbots, because they sometimes produce misinformation. This could change, and it could change quickly: these applications are trained through continual use, and with an average of 60 million active visits per day, ChatGPT is ingesting an abundance of data.
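For readers curious what "heavy lifting on first drafts" looks like in practice, below is a minimal sketch of how one might call a text-generation API to produce a rough draft and then treat the output as something to fact-check and revise. It assumes the OpenAI Python SDK (version 1 or later) with an API key set in the environment; the model name and prompt are illustrative placeholders, not recommendations.

# Minimal sketch: ask a text-generation API for a first draft, then review it by hand.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whichever model you use
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a 200-word outline for a blog post on small-business bookkeeping."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # Treat this as a rough first draft: fact-check and revise before publishing.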

A word of caution: be careful when uploading information into AI applications. Once information is uploaded, it can be incorporated into the provider's training data, which means the system can access and use your proprietary knowledge. Providers are starting to offer settings that let you indicate you do not want your information used to train their models, but the safest bet is to keep your intellectual property out of these systems entirely. It should also be noted that in some states and countries it is illegal to upload client or customer information into AI systems without a signed agreement to do so.

ChatGPT is about to blow the roof off educational institutions, and the education industry is not prepared. The sector is slow to adapt and weighed down by bureaucracy, yet text-generating applications will forever shift education. Innovation will be needed sooner rather than later to prepare this generation for a new wave of education and for shifts in the job market.

Final Thoughts

It is inevitable that AI will be integrated into our lives. While AI has the potential to revolutionize various industries and improve the lives of people worldwide, addressing the challenges, limitations, and ethical concerns associated with AI is crucial to ensure its responsible development and implementation. By acknowledging these issues and working towards solutions, we can harness the power of AI to create a better future for all.

Learn more about the benefits, downfalls, and disruption of AI in this series:

AI History And Future

AI’s Influence On Jobs And The Workforce

The Future Of Education - Disruption Caused By AI And ChatGPT

AI Regulation, Why Experts Are Calling For Slowing Down Artificial Intelligence

AI Utopia And Dystopia. What Will The Future Have In Store?
