All artificial intelligence (AI) methods today revolve around machine learning models that use some form of sophisticated correlation or association, which can be loosely described as brute-force robot learning. In other words, we're talking about reverse engineering existing features or patterns and providing useful "forward engineering" solutions like:
  • Self-driving cars
  • Detecting diseases from X-rays/MRIs
  • Robots in manufacturing
  • Chatbots for customer service
  • (Insert the next big future application here)
The logic is that event X follows event Y, or simply that X occurs together with Y in historical and simulation data. Based on that, we can build automated models to predict an unknown object, variable or situation, and even prescribe actions. However, today's machine learning is about figuring out the "what" in images, speech, numbers, translation and text, and not so much about the "why."
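To make the "X occurs with Y" logic concrete, here is a minimal sketch of an association-style predictor; the features, outcomes and counts below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy observations: (context feature, observed outcome).
# The model only counts co-occurrences - it has no notion of cause.
observations = [
    ("clouds", "rain"), ("clouds", "rain"), ("clouds", "dry"),
    ("clear", "dry"), ("clear", "dry"), ("clear", "rain"),
]

# Count how often each outcome occurs with each feature.
cooccur = defaultdict(Counter)
for feature, outcome in observations:
    cooccur[feature][outcome] += 1

def predict(feature):
    """Return the most frequently co-occurring outcome (the "what"),
    with no model of why the association holds (the "why")."""
    return cooccur[feature].most_common(1)[0][0]

print(predict("clouds"))  # -> 'rain' (association, not causation)
```

A real system replaces the counting with a statistical or deep learning model, but the principle, predicting from observed co-occurrence rather than mechanism, is the same.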
 
“What” can work if the environment under which training happened also occurs to some extent during prediction, at least in context — right? The question so far has been, “can we figure out all the ‘whats’ in our models?” The answer is yes, to the extent that we can train on lots and lots of data within the same context. Examples of context would be playing chess or Pokémon Go, driving on streets, website browsing, etc. Techniques such as deep learning and reinforcement learning, combined with GPU hardware, big cluster farms and days or weeks of training, make it possible.
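The train-on-lots-of-data idea can be illustrated with the simplest possible training loop: gradient descent fitting a single weight over many passes through the same data. The data, learning rate and epoch count below are invented for illustration; deep learning does the same thing with millions of weights, which is why GPUs and cluster farms are needed:

```python
# Fit a single weight w so that w * x approximates y,
# by repeatedly nudging w against the squared-error gradient.
data = [(x, 2.0 * x) for x in range(1, 6)]  # ground truth: y = 2x

w = 0.0
lr = 0.01
for epoch in range(200):            # many passes over the same context
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad

print(round(w, 2))  # -> 2.0
```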
 
The idea of machine learning or deep learning is to mimic the brain’s logic by extracting features, then memorizing and generalizing instances of interest. Human brains work by both memorizing and generalizing, but the brain is also able to “creatively infer causation,” in other words, the “why.” This is yet to be seen in today’s AI algorithms, and depending on the kind of progress we make on it, we will either move toward general AI or start a new AI winter.

Today’s AI cannot creatively infer causation on its own 

Here are some challenges in today’s AI world:
  1. The models can tell that the sunset at a beach will be red or yellow. AI cannot tell why it is so on its own. It will not know that the shorter wavelengths of sunlight scatter away in the atmosphere, leaving the red, and that pollutants in the atmosphere deepen the effect.
  2. The models can tell that an X-ray image shows a cancerous polyp, but cannot tell why. They will not know that the polyp was caused by a DNA mutation, dietary factors and an external trigger or environment from six months ago.
  3. The models can tell that an umbrella in a picture is for either rain or hot sun, but cannot tell why it was designed for either purpose in the first place. They will not know that the length and colour of the umbrella were chosen to reflect the sun’s rays while balancing against average wind speed so it doesn’t fly off.
  4. The models can tell that a behavior is fraudulent or suspect, but cannot really explain why the fraudster is targeting this particular business or using this particular technique.
  5. The models can power chatbots that answer questions intelligently by learning from a large corpus of past chats, text or Q&A. They will, however, miss sarcasm, humour or the main intent at the outset.
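The common thread in the five examples above can be sketched in a few lines: a classifier trained on instances can output a label, the "what", but nothing inside it represents a mechanism, the "why". The feature vectors and labels below are invented for illustration:

```python
# A tiny 1-nearest-neighbour classifier: it labels a new point
# by pattern matching against memorized instances, nothing more.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented feature vectors, e.g. pixel statistics of an X-ray region.
training = [
    ((0.1, 0.2), "benign"),
    ((0.9, 0.8), "suspicious"),
    ((0.8, 0.9), "suspicious"),
]

def classify(point):
    # Return the label of the closest memorized example ("what").
    # There is no causal model here to answer "why".
    return min(training, key=lambda t: distance(t[0], point))[1]

print(classify((0.85, 0.85)))  # -> 'suspicious'
```

Whether the instances are umbrella photos, fraud records or chat logs, the structure is the same: recorded patterns in, labels out, with no causal chain in between.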

Using circumstantial evidence (WHAT) is different than finding probable cause (WHY)

The above examples assume the AI bots don’t have access to the internet to cheat by looking up recorded facts on Wikipedia — which, again, is just retrieving what’s recorded somewhere — instead of doing research on their own, reaching new conclusions, or opening doors and discovering new facts.
 
Sadly, data scientists today focus on how we arrive at the “what” in our models while playing down the “why” question. Clearly, human intelligence, creativity and ingenuity are not going to be replaced, at least in the near future. Computer mimicry will only automate results for known or learned situations given enough data; we’re still not quite there in terms of getting into the depths of reasoning and creative thinking. That’s what cognitive computing is really all about!
 
Nonetheless, today’s AI holds lots of useful applications for business and life using deep learning and sophisticated machine learning mimicry. There are thousands of use cases that can benefit from machine learning/deep learning. However, it’s important to know what today’s AI is and is not.
Karthik Guruswamy

Karthik Guruswamy is a business-first data scientist with Think Big Analytics, a Teradata company. He has worked in the database and analytics space for 20-plus years in several roles, starting out as an RDBMS server developer and later as a server architect, building data infrastructure for startups from the ground up. Today he is a principal data scientist, advising Teradata’s customers on data science use cases.

Karthik co-founded two Silicon Valley startups and was an early employee of Aster Data, which was acquired by Teradata in 2011. During the course of his startup career, he was awarded several patents in the areas of virtualization and ad networks. Karthik is also a frequent speaker at Teradata Partners conferences and blogs extensively about data science on LinkedIn and the Aster Community Portal.

Karthik is a passionate expert on building game-changing analytic solutions that allow customers to rapidly find business insights in their data. He works on both unstructured and structured data, using a wide variety of algorithms to unravel hidden structures. He uses big data technologies such as MapReduce and graph engines. He is also well-versed in traditional SQL, Python, Perl, R and Java, with expertise in classical machine learning, deep learning, text mining, time series and Markov models, pattern recognition, graph theory, and ensemble learning techniques.
