Two myths around ChatGPT
ChatGPT has surged in popularity since its public release in late 2022. Some call it proof that AI is on the verge of surpassing human intelligence; others call it stupid. As usual, when a major new product hits the market and seems revolutionary, it is wise to re-frame the full context. The reality lies between those two extremes. Jacques Bughin, Senior Advisor to Fortino and Professor at Solvay Business School, describes, by way of example, two myths worth quickly debunking around ChatGPT.
Myth 1: A breakthrough innovation
ChatGPT has indeed taken most of us by storm recently, but contrary to conventional wisdom, it has not come easily or quickly. It has emerged from a multi-decade journey of AI research marked by many dead ends and failures. The journey began as early as the Sixties and Seventies with expert systems such as MYCIN and XCON, and chatbots such as PARRY, but all faded quickly, proving powerful only within very narrow functions and domains. One had to wait until the Nineties and the internet to see new attempts at machine expertise, with Berners-Lee pushing the semantic web, and then, in this century, Google's knowledge graph, IBM Watson, and Apple's Siri on the iPhone.
The real revolution, however, has come after more than 50 years of trial and error: large pre-trained language models (LLMs) based on deep networks of the transformer type. That is, the real breakthrough comes from massive digital traces (our activities on the web) and AI machine learning models such as reinforcement learning. Said differently, ChatGPT would not be what it is without big data and AI.
Myth 2: ChatGPT converges quickly to omniscience
Each new version of ChatGPT (now version 4) has demonstrated that it is capable of effective dialogue and problem-solving, often said to be at a capability level comparable to a teenager's. It works in multiple languages, answers in various tones, writes code, and passes exams rather effectively, the story goes.
But despite all those benefits, it is crucial to acknowledge limitations and biases that are unlikely to be easily solved. This is because ChatGPT, as noted, is built on data and uses a reinforcement learning model that is itself subject to limitations.
Regarding data bias and ethics, this is a long-known and complex problem to solve. One need only recall how early chatbots from Microsoft or Facebook-Meta spread abusive language. ChatGPT attempts to preempt this, and even acknowledges it when asked whether it provides biased answers: it clearly states that it is designed to recognize patterns in data, patterns that are inevitably the product of history, social norms, and behaviors. Today, biases tend to be minimized through human adjustments, but those human interventions are costly, and themselves not immune to bias.
Regarding omniscience, it is more than a question of literacy; it is a question of capabilities such as creativity and reasoning. While ChatGPT goes a long way toward substituting for human tasks, as we have long known from AI automation, it is far from taking over higher-level capabilities. A case in point: ChatGPT is recognized as being strongly able to code, possibly more effectively than it generates text.
But there are a few catches. First, this is possibly because the programming universe is more confined and predictable than other forms of language. Second, ChatGPT can perform generic functions, write repetitive code, and debug programs, but it has yet to build genuinely creative programs; and in any case, programmers' roles go well beyond pure coding. Third, multiple studies have shown that coders with AI assistants delivered buggier and less secure code than coders without ChatGPT-like support. Finally, there is a danger of ChatGPT being used to create malicious code, and that must be tightly controlled for obvious reasons.
Concluding? ChatGPT is a formidable advance for AI, to say the least. It is important that we embrace such innovations, but also that we remain aware of their benefits and caveats. The latter should not serve to emphasize drawbacks, but rather to ensure that people do not "fly blind" and that others have a clear working agenda to refine the innovation toward even greater benefits than today's. The journey is on.
Article by Jacques Bughin, Senior Advisor to Fortino Capital