How ChatGPT (and GPT-4) is Changing the AI Climate
Artificial Intelligence (AI) has seen massive growth in recent years, and it’s difficult to discuss modern technology without at least acknowledging its impact. Lately, OpenAI’s ChatGPT has propelled AI into the mainstream and generated enormous interest in the field from corporations and individuals alike. With backing from the likes of Microsoft, it’s clear that the AI technology developed by OpenAI isn’t going anywhere anytime soon.
In fact, OpenAI and Microsoft have already started integrating OpenAI technology into Azure, Microsoft’s cloud computing platform. They have also recently announced the upgrade of ChatGPT’s model from GPT-3.5 to GPT-4, which we’ll take a closer look at later in this post. With rapid advancements and near-daily announcements in AI, it’s natural to wonder how the field reached this point. In this post, I explore the evolution of artificial intelligence and how it’s being developed and applied today by companies such as OpenAI and Microsoft. I will cover the partnership between these two companies, including their joint development of cutting-edge AI technologies and the impact of these advancements on the field. By the end of this post, you will have a better understanding of the role that OpenAI and Microsoft are playing in shaping the future of AI.
The Inception and Evolution of AI
The modern history of AI began in the mid-20th century, and its inception can be traced back to a summer research project at Dartmouth College in 1956. The workshop was organized by John McCarthy, considered by many to be the founding father of AI, and included Marvin Minsky, Nathaniel Rochester, and Claude Shannon. It brought together researchers from various disciplines to discuss the possibility of creating machines that could exhibit human-like intelligence, and it is where the term “artificial intelligence” was coined.
In the late 1950s and 1960s, AI research focused on developing algorithms that could perform tasks such as playing chess and solving mathematical problems. One of the most notable achievements of this era was the Logic Theorist, created by Allen Newell and Herbert A. Simon and presented at the 1956 Dartmouth workshop; it is widely regarded as the first AI program.
In the 1970s, AI research faced a setback due to the lack of computing power and the inability of machines to handle the complexity of human-like intelligence. This period is known as the AI winter, and it lasted until the 1980s.
In the 1980s, there was a renewed interest in AI research, and the focus shifted to developing expert systems. Expert systems were designed to mimic the decision-making abilities of human experts in specific domains. They were used in a variety of applications, such as medical diagnosis and financial planning.
In the 1990s, AI research continued to evolve, and new technologies, such as neural networks and genetic algorithms, were developed. Neural networks were inspired by the structure of the human brain and were used for tasks such as speech recognition and image classification. Genetic algorithms were used to evolve solutions to complex problems.
In the 21st century, AI research has made significant strides due to advances in computing power and the availability of large amounts of data. Machine learning, a subset of AI, has become one of the most popular areas of research and has been used in a variety of applications such as natural language processing and autonomous driving.
Modern AI Technologies
Due to the popularity of AI in recent years, many new technologies and applications are emerging. One such technology is machine learning, which involves training algorithms to learn from data and make predictions or decisions without being explicitly programmed. Natural language processing (NLP) is another popular application of AI, enabling machines to understand, interpret, and generate human language. Similarly, computer vision allows machines to interpret visual data from the world around them, such as images and video, while robotics uses machines to automate tasks and perform physical actions in the real world. Other applications include recommendation systems, which personalize suggestions for products, services, or content based on user behavior and preferences, as well as speech recognition, virtual assistants, and autonomous vehicles. AI is also increasingly used in industries such as finance and e-commerce to detect and prevent fraud, and in healthcare to assist in diagnosis and treatment, for example by detecting cancer cells in medical images.
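To make the idea of “learning from data” concrete, here is a minimal, purely illustrative sketch (not tied to any of the products discussed in this post) that trains a small classifier on scikit-learn’s built-in iris dataset; the dataset and model choice are assumptions made only for demonstration.

```python
# A minimal supervised machine learning sketch using scikit-learn (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small labeled dataset: flower measurements (features) and species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# The model is never told the classification rules; it infers them from the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predictions on unseen data show what "learning from data" buys us.
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The same pattern, fitting a model on labeled examples and then predicting on new inputs, underlies far larger systems, just with vastly more data and parameters.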
Enter OpenAI and ChatGPT
OpenAI was founded in 2015 by a group of technology leaders, including Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, and John Schulman. The company launched with $1 billion in pledged funding from a group of backers that included Reid Hoffman, Peter Thiel, and Amazon Web Services (AWS). The idea for OpenAI emerged from concerns about the risks associated with advanced AI technology. Musk and Altman were both vocal about the need to ensure that AI was developed in a way that was aligned with human values and goals rather than pursuing its own agenda. They believed that creating an open research environment that encouraged collaboration and transparency would be a key way to mitigate these risks.
The company has grown rapidly since its founding, attracting some of the top AI researchers in the field. Its research focuses on a wide range of AI topics, including natural language processing, computer vision, robotics, and reinforcement learning. In addition to its research efforts, OpenAI also collaborates with industry partners and government agencies to promote the safe and beneficial use of AI.
One of the results of OpenAI’s research and development is ChatGPT: an AI language model based on the GPT (Generative Pre-trained Transformer) architecture. It was pre-trained on a vast amount of text data using unsupervised learning and then fine-tuned with human feedback, enabling it to generate human-like responses to natural language input.
The development of ChatGPT was a collaborative effort involving a team of researchers and engineers at OpenAI. The initial version of the model, GPT-1, was released in 2018, followed by GPT-2 in 2019, GPT-3 in 2020, and GPT-3.5 in 2022. These models were trained on increasingly large amounts of text data, allowing them to generate more complex and sophisticated responses.
ChatGPT is one application of the GPT architecture designed specifically for natural language processing tasks such as language translation, text summarization, and dialogue generation. It has been used in a wide range of applications, including chatbots, virtual assistants, and language-learning tools. The development of ChatGPT and other language models like it is a significant achievement in the field of AI, as it allows machines to communicate with humans in a more natural and intuitive way. As the technology continues to advance, we can expect to see even more sophisticated and capable language models that can understand and generate human language with greater accuracy and nuance.
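As a rough illustration of how developers build chatbots and assistants on top of models like ChatGPT, the sketch below calls OpenAI’s hosted API through the pre-1.0 `openai` Python package; the model name, prompt, and the assumption that an API key is available in the environment are illustrative choices, not details from the original article.

```python
# Illustrative sketch: a tiny question-answering helper built on OpenAI's hosted models.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(question: str) -> str:
    """Send a single user message to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model family behind ChatGPT at the time of writing
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the transformer architecture in two sentences."))
```

In practice, production chatbots layer conversation history, moderation, and retrieval on top of this basic request-and-response loop.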
Code Red at Google
While ChatGPT and the technology behind it were bound to make an impact, it’s unlikely that anyone expected how fast and widespread its adoption would be. From kids asking a bot silly questions to students writing entire essays in a matter of seconds, many people discovered AI’s abilities through ChatGPT upon its release and spread the word. While the public response was largely one of excitement, Google saw things differently.
Less than a month after ChatGPT’s late November 2022 release, Google declared a company-wide “code red”. The importance of AI was not lost on Google; it had acquired the AI research company DeepMind nearly a decade earlier and already had an advanced conversational AI called LaMDA, short for Language Model for Dialogue Applications. So, why sound the alarms? The almost immediate and widespread success of ChatGPT threatened the bread-and-butter of Google’s business model: ads. Most of Google’s revenue is generated by ads, and if people aren’t searching and clicking through links on Google, that revenue suffers. ChatGPT has shown how AI chat functionality can improve, or even replace, search engines, and Google has to figure out quickly how similar technology can fit into its own business model.
Bard: Google’s Attempt to Compete with Microsoft and ChatGPT
Google announced its own generative, conversational AI chatbot, called Bard, which is designed to complement the company’s search engine. It is based on Google’s LaMDA language model and is capable of answering complex questions and generating new text. The company claims that Bard can condense information from dozens of web pages into just a handful of paragraphs. However, Google has been slow to bring Bard to market, in part because the chatbot’s output depends on its training data and could potentially spread misinformation. There is also potential for integrations with Google services like Gmail and Docs, much like Microsoft’s already-integrated Office apps. Bard is currently only available to “trusted testers,” and Google says it will open up access to the general public in the coming weeks. Given the need-to-know approach Google is taking with Bard, we may not learn its full capabilities, or how it actually performs, until it reaches the general public.
Microsoft: No Risk, No Reward
On January 23, 2023, Microsoft announced the third phase of its partnership with OpenAI, which includes a reported $10 billion investment in OpenAI. The investment, which follows a $1 billion investment in 2019 and another round in 2021, will give Microsoft access to some of the most popular and advanced AI systems and an edge over rivals Google (Alphabet), Amazon, and Meta Platforms. In turn, OpenAI will gain access to Microsoft Azure’s cloud-computing power to run its increasingly complex models, which enable its programs to generate images and conversational text.
This partnership doesn’t only apply to Microsoft’s Azure services, though. Microsoft is preparing to showcase how OpenAI’s technology will transform its core productivity apps like Word, PowerPoint, and Outlook. Additionally, Microsoft has already released a limited preview of its Bing search engine with AI integrations. Microsoft is moving quickly with this integration largely because of Google. Bing can already generate tables and charts for basic data, and turning those into visual graphics for use in Office apps is a logical next step. Microsoft CEO Satya Nadella, buoyed by the positive response to ChatGPT, is keen for the software maker to be seen as a leader in AI and to counter any response from rival Google.
Oh, The Irony
One of the key principles of OpenAI at its founding was openness, meaning that the company’s research is published and made publicly available for others to use and build upon. OpenAI has also released several open-source software tools, including earlier GPT language models and the Gym reinforcement learning toolkit. This openness is likely one of the reasons investors like AWS believed in OpenAI. The irony of AWS being an initial investor is that AWS and Microsoft Azure are the two biggest rivals in cloud computing today. With Microsoft’s partnership with, and integration of, OpenAI in its Azure platform, it looks like Microsoft benefited from Amazon’s early investment. Time will also tell how OpenAI’s commitment to openness holds up as it moves from its nonprofit roots toward a capped-profit, increasingly proprietary model.
Microsoft didn’t gain from only one rival, though. The technology at the core of ChatGPT’s model is based on a type of neural network architecture known as a transformer. The transformer architecture is particularly well suited to language modeling because it can process an entire sequence of words at once rather than one word at a time. This is accomplished through self-attention mechanisms, which allow the model to weigh the importance of each word in the sequence based on its context. Ironically, the transformer architecture was developed by a team of researchers at Google.
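For the curious, here is a minimal sketch (not from the original post) of single-head scaled dot-product self-attention in NumPy; the dimensions, random weights, and toy inputs are purely illustrative, and real transformers add multiple heads, positional encodings, and many stacked layers.

```python
# Illustrative sketch of single-head scaled dot-product self-attention.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Attend over a whole sequence of token embeddings at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values for every token
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)         # context-dependent importance of each word
    return weights @ V                         # each output is a weighted mix of all tokens

# Toy example: 5 tokens with 8-dimensional embeddings and random projection weights.
rng = np.random.default_rng(0)
d_model = 8
X = rng.normal(size=(5, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # -> (5, 8): one contextualized vector per token
```

Because the attention weights are computed for every pair of tokens at once, the whole sequence can be processed in parallel, which is a large part of why transformers scale so well on modern hardware.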
GPT-4: The Latest Model For, But Not Limited To, ChatGPT
OpenAI has developed GPT-4, a large multimodal model that accepts both text and image inputs and emits text outputs, exhibiting human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. It is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5, and it considerably outperforms existing large language models and most state-of-the-art (SOTA) systems on a range of benchmarks. However, it still has limitations and is not fully reliable. Image inputs remain a research preview and are not publicly available: GPT-4’s text input capability is being released via ChatGPT and the API, while its image input capability will initially be made available through a single partner. OpenAI is also open-sourcing OpenAI Evals, its framework for automated evaluation of AI model performance, so that anyone can report shortcomings in its models and help guide further improvements.
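Since GPT-4’s text capability is exposed through the same chat API as GPT-3.5, one quick way to get a feel for the difference is to send the same prompt to both models. The sketch below is illustrative only: it assumes the pre-1.0 `openai` Python package, an API key in the environment, and that the account has been granted access to the "gpt-4" model.

```python
# Illustrative sketch: compare GPT-3.5 and GPT-4 on the same nuanced instruction.
# Assumes the pre-1.0 `openai` package and API access to the "gpt-4" model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = (
    "Explain the AI winter of the 1970s in exactly three sentences, "
    "without using the words 'computer' or 'funding'."
)

for model in ("gpt-3.5-turbo", "gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # low randomness makes the side-by-side comparison easier
    )
    print(f"--- {model} ---")
    print(response["choices"][0]["message"]["content"], "\n")
```

Note that image inputs cannot be exercised this way yet, since they remain in a limited research preview.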
Other AI Tools
Although Google and Microsoft (with OpenAI) are currently leading the race in the AI industry, it’s important to note that several other AI tools are available. Aside from ChatGPT, OpenAI has various AI projects; its DALL-E model, for example, can produce art and images from a textual description. Another example is Salesforce’s Einstein AI, a business intelligence AI that acts as a smart assistant, providing recommendations and automating repetitive data entry for employees, thus enabling them to make informed decisions. What’s more, not all AI tools are backed by big corporations. CodeProject.AI is a prime example of a great tool that provides image and video analysis for applications such as object detection in a security system.
The Future of AI
The future of AI is promising and holds immense potential for advancements in various industries. AI will continue to revolutionize the way we work and live, transforming business operations and enhancing human capabilities. We can expect to see more sophisticated AI systems that can perform complex tasks with greater accuracy, efficiency, and speed, which will lead to increased productivity and profitability.
In the near future, we will see AI being integrated into more consumer products and services, such as virtual assistants, smart homes, and self-driving cars. We will also see AI being used more in healthcare to assist in diagnosis and treatment, as well as in education to personalize learning experiences for students.
As corporations compete to be at the top of the AI industry, AI technologies will continue to develop rapidly. We may see the emergence of more advanced forms of AI, such as Artificial General Intelligence (AGI), which can perform tasks across a wide range of domains and potentially surpass human-level intelligence. However, AGI raises significant ethical concerns, and it will be crucial to ensure that AI development is aligned with human values and goals.
Overall, the future of AI is exciting, and we can expect continued growth and advancement in this field, which will undoubtedly have a significant impact on society and our daily lives.
Contact Us
For more information on this topic, contact our Digital and Technology Transformation Team.