AI in General
What is AI?
Dan True
Head of Solutions
Are AI and Machine Learning the same thing?
What about Deep Learning?
AI is all the hype these days: everyone and their dog is talking about it on LinkedIn, and it seems like AI experts are crawling out of the woodwork. As someone who began studying AI back in 2008, at the thawing of the last AI Winter, I’ve seen the field of practitioners grow exponentially in the last few years – with many of them not seeming to realise that this field of Computer Science arose in the 1950s, and that its formal foundations stretch back much further. Back in the 1970s people had the same discussions as now about what was just a program and what was AI, while businesses sold a vision of AI running on specialized hardware that they could sell us (look up LISP machines). There’s a famous quote (from 1971!):
“AI is a collective name for problems which we do not yet know how to solve properly by computer” – by Bertram Raphael (co-creator of the A* search algorithm)
This logically entails that as soon as we know how to solve a problem, it stops being AI. While I don’t agree that’s how we should define AI, it accurately reflects how hard we find it to adequately define what we’re working with.
So the next time you have a discussion on LinkedIn about what AI is or isn’t, know that you’re partaking in a decades-old discussion and take some pride in following in the footsteps of your predecessors. Since the field’s inception, it has become clear that to adequately define Artificial Intelligence, we first need to define what Intelligence itself is – which we still really can’t. I hate dependency problems.
So what is it?
Let’s take a look at what the broader field of AI looks like as of mid-2025. This isn’t an exhaustive or scientific list, but it’s what I use when I try to bring my point across to my customers. I will focus on ease of understanding and explainability here, not on making a taxonomy that will stand the test of time. I know how futile that is, and I have a kettle on.
I usually divide the field into the following sub-fields: Predictive AI, Generative AI, Symbolic AI and AI Understanding.
Predictive AI
In short, Predictive AI covers methods that use a set of data to predict new data points from trends in that data. The term is often used interchangeably with Machine Learning, which isn’t entirely accurate (I’ll get to why), but works for most casual conversations. Predictive AI as a discipline split from the wider AI field (which was based in Computer Science) during the 1980s and grew out of a numerical/statistical approach.
Predictive AI covers a lot of different types:
- Statistical Methods: techniques such as Linear Regression use pure statistics to predict the next value, e.g. predicting the future price of a good or service based on its historical values.
- Supervised Learning: These models are given labelled data-sets and are trained to predict the label of new examples, e.g. predicting whether a credit card transaction is fraudulent, based on historical labels of fraudulent transactions.
- Unsupervised Learning: These models try to learn from raw data which is not labelled, e.g. predicting which customers belong to the same customer segment without having labelled previous customers beforehand.
- Reinforcement Learning: These models try to optimise a reward function in an environment and need to constantly balance short-term and long-term gain in that reward function. Classic examples would be an AI playing chess or Go, where the reward function is a score of the current state of the board and the player tries to optimise its own score. Note that Reinforcement Learning has strong ties to Symbolic AI, which we will introduce below.
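To make the predictive idea concrete, here’s a minimal sketch of the statistical flavour: an ordinary least-squares linear regression written from scratch. The price series is entirely made up for illustration – the point is only that a trend is fitted from historical data and then extrapolated to a new data point.

```python
# Minimal illustration of Predictive AI: fit a linear trend to
# historical data, then use it to predict a new data point.
# The price data below is invented for illustration.

def fit_linear(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Historical prices of some good, indexed by month.
months = [1, 2, 3, 4, 5, 6]
prices = [10.0, 11.1, 11.9, 13.2, 13.8, 15.1]

slope, intercept = fit_linear(months, prices)
predicted_month_7 = slope * 7 + intercept
print(f"trend: {slope:.2f}/month, predicted month-7 price: {predicted_month_7:.2f}")
```

The supervised and unsupervised variants follow the same pattern – learn parameters from historical data, apply them to new inputs – just with labels and clusters instead of a straight line.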
Note that Neural Networks, though hyped as ‘a model of the human brain’, are just one of many methods within Predictive AI. Most of a Data Scientist’s effort goes into understanding the available data, testing various models to find a good predictor and fine-tuning the input to get the most accurate predictions possible.
GenAI, which I now consider its own field within AI, grew out of Unsupervised Learning prediction models trained on text, images and sound.
One key issue with Predictive AI is that it’s not explainable in general. While you can do some parameter analysis to map which parameters affect the outcome the most, you generally can’t adequately explain why a certain prediction turned out the way it did – especially in predictive models with many input parameters: imagine asking for an explanation of why some request was denied and receiving a list of many hundred weighted parameters back. Not exactly useful to a human.
For some use cases this is acceptable, like when predicting ice-cream sales. But when a predictive model makes a decision directly affecting a human, like predicting whether they’re likely to pay back a loan, be a good tenant in an apartment or become a criminal, it quickly becomes immoral if not straight-up illegal in some jurisdictions.
Predictive models are dependent on data – not necessarily much, but some – to do anything. That also means there is a large class of problems for which no data exists, and to which Predictive AI is therefore hard to apply.
Generative AI
Generative AI (GenAI) concerns systems which generate output, usually text, images or sound, though more exotic cases also exist – such as generating 3D models from prompts or animations from motion capture. Note that under the hood many GenAI systems are built upon Predictive AI models hailing from Unsupervised Learning, where the prediction is the next word, soundbite etc., which results in a generated stream of output. Other GenAI models are more unique to this field, such as Diffusion Models, which generate an entire output and then refine it towards the desired target over several steps.
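To make the “predict the next word” idea concrete, here is a toy sketch: a bigram model that counts which word follows which in a tiny invented corpus, then greedily emits the most likely next word. Real LLMs predict over subword tokens with a large neural network; this only illustrates the predict-append-repeat loop.

```python
from collections import Counter, defaultdict

# Toy next-token generation over a tiny, made-up corpus.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram counts: how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break  # no known continuation
        # Greedy decoding: always pick the most frequent follower.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Swapping the greedy pick for a weighted random choice gives the sampling behaviour (“temperature”) that makes LLM output varied rather than deterministic.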
The Large Language Model (LLM) is the GenAI model most laypeople consider ‘AI’ nowadays, as it’s a system they can chat or talk to, show images and give general tasks, and expect some (often) reasonable answer from. As such, it covers enough useful bases broadly enough that some people can anthropomorphize it into seeming intelligent across a wide range of tasks.
What LLMs have done for us in the AI business is open up a whole range of new use-cases within text-heavy settings which earlier AI efforts had hardly scratched: HR, customer service, law & regulation, marketing etc. On top of that, LLMs are often quite good at mapping from one data format to another, which means we can use them to convert real-world data of various formats into a single format and then apply another type of AI (Predictive or Symbolic) to the problem, where we couldn’t before.
One of the biggest issues with GenAI is that its output space is so wide – free-form text or images – that it can be hard to verify the correctness of output and thus rely on it in critical systems. Furthermore, some GenAI models (like LLMs) are notorious for hallucinating and not knowing when to say they don’t know – out of their depth, they will usually spew several wrong answers and continue to do so, only admitting they can’t solve the problem after a lot of annoyed prompting by the user. The combination of convincing-sounding answers, a very big output space and the difficulty of verifying answers means users can easily be fooled unless a lot of engineering work goes into verifying the solution.
If GenAI is applied to a specific problem within an organisation, data is often still critical to get good results – but the data here would be documents, images etc. instead of raw data points. Another key point is that GenAI models are usually very large, cloud-owned models (ChatGPT, Claude, Mistral etc.) where any improvements, training and tuning are handled by the model provider, not the local organisation.
In essence, GenAI has lowered the barrier to when AI can be applied to a problem, but also muddled how we evaluate the correctness of an answer.
Symbolic AI
Symbolic AI works on ‘Symbols’, which can mean a lot of different things in different contexts. In a routefinding system, symbols represent roads, intersections, speed limits etc., while in a chess AI the symbols represent the board, the pieces and the available moves. What makes Symbolic AI, well… AI – is combinatorial search: the ability to search through more possible states than a computer could naively enumerate in the lifetime of the universe, much less a single turn in your current game of Civilization.
In general if a problem can be modelled as a search problem, in the broadest sense, and is of a scale where a human can’t handle the task, Symbolic AI can have something to offer, such as:
- Scheduling: planning usage of large, interdependent resources, such as shift planning or production planning
- Routefinding: Navigating a world or live-optimizing transport networks
- Acting: moving and taking actions in a complex, interdependent environment, such as a chess engine, virtual worlds or some robots
The AI part is the various techniques available to limit the number of possible combinations actually evaluated, as many of these search spaces are incredibly large and unmanageable with classical brute-force methods. This is similar to how humans and other animals constantly navigate between an incredible number of available actions. In the literature this is called the intractability problem.
One key component of a Symbolic AI system is how you define those symbols. This is called knowledge representation and is essentially a set of answers to the question: “how do I encode this piece of information symbolically?” In practice this is done with formal and computational logics. Some representations, like the current state of a chess board, are trivial, while complex relations between time, space and world entities can be incredibly hard to model.
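As a sketch of both ideas – representing a world as symbols and searching over them – here is A* (the algorithm Raphael co-created) finding a route through a tiny hand-made road graph. The intersection names, coordinates and road costs are all invented for illustration:

```python
import heapq
import math

# A tiny symbolic world: intersections as symbols, roads as weighted
# edges. All names, coordinates and costs are made up.
coords = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (4, 2), "E": (4, 0)}
roads = {
    "A": [("B", 2.0)],
    "B": [("A", 2.0), ("C", 2.5), ("E", 2.0)],
    "C": [("B", 2.5), ("D", 2.0)],
    "D": [("C", 2.0), ("E", 2.0)],
    "E": [("B", 2.0), ("D", 2.0)],
}

def heuristic(node, goal):
    # Straight-line distance never overestimates road distance,
    # which keeps A* optimal (an "admissible" heuristic). This is
    # what prunes the combinatorial explosion: unpromising states
    # are never expanded.
    return math.dist(coords[node], coords[goal])

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for neighbour, edge_cost in roads[node]:
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbour, math.inf):
                best_cost[neighbour] = new_cost
                estimate = new_cost + heuristic(neighbour, goal)
                heapq.heappush(frontier,
                               (estimate, new_cost, neighbour, path + [neighbour]))
    return None, math.inf

path, cost = a_star("A", "D")
print(path, cost)  # → ['A', 'B', 'E', 'D'] 6.0
```

The knowledge representation here (a dict of nodes and weighted edges) is the trivial end of the spectrum; the hard part of real Symbolic AI is exactly when the world doesn’t decompose this neatly.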
AI Understanding
AI Understanding is the last field of AI. It’s very broad and essentially handles the big philosophical question of how AI interacts with people and the world – but dives more into specifics in various sub-fields:
- AI Ethics: What do we allow an AI to do from a value-based perspective, what laws and regulations apply to AI, and how do we effectively shape such laws and regulations?
- AI Adoption: How does AI interact with and shape businesses, processes, employees etc. when introduced into an organization?
- Formal AI: What are scaling problems vs knowledge representation problems? Are there fundamental limits to what we can model with AI? What are the mathematical foundations of AI?
- AI Philosophy: What even is AI? How do we test for true intelligence? What does it mean to be intelligent?
Of these, AI Ethics and the corresponding AI regulation are the playing field of lawyers and politicians, while AI Adoption is a job for consultants implementing AI in organizations. Formal AI and AI Philosophy are important for every AI practitioner to have a grounding in, but are developed in academic and research circles.
In my view, AI Understanding is the most important to get a broad foundation in if you want to be an AI practitioner. You may be an expert in Predictive or Generative AI, but without a firm grounding in AI Understanding, you’re bound to repeat the mistakes from the last several decades.
So, is that it?
Of course, with four different fields within AI there are bound to be areas where they intersect, and this is where it gets really interesting. Let’s get to it.
Neurosymbolic AI
- Take one Symbolic AI system
- Pair it with a Predictive system for data-based learning or a genAI system for text/image-based interfacing. Or both
- Shake it – don’t stir.
- You now have a Neurosymbolic system.
Neurosymbolic systems are where it gets really interesting. Some of the most impactful or well-structured AI systems are neurosymbolic, and I personally believe this is where we’ll see the greatest impact of AI systems on the real world in the coming decade. A few examples and explanations:
Google Maps and similar systems
Google Maps and its competitors are at their core Symbolic AI systems – they contain symbolic representations of roads, turns, roundabouts etc. for most of the mapped world. They also have one or more symbolic AI algorithms for finding routes from A to B in said world. But if anyone can remember how bad the first generation of GPS systems in cars were, you can also understand why there’s a Predictive component to Google Maps: cost estimation.
The cost of taking a specific road (in time, gas, electricity etc.) is predicted by a Predictive AI system, based on a large amount of historical traffic data for that specific road. This is what makes these systems useful in the real world, where you get a fairly accurate estimate of travel time based on estimated traffic for the given route at the given time.
A purely predictive AI system couldn’t return detailed routes, just estimated travel times. A purely symbolic system can’t use real-world data for correct cost estimations. That’s why the neurosymbolic combination is so successful for this use case.
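The split can be sketched in a few lines: a symbolic road graph searched with Dijkstra’s algorithm, where the edge costs come from a stand-in “predictive” component. All roads, lengths and congestion figures below are invented; in a real system the predictor would be a model trained on historical traffic data rather than a hand-written rule.

```python
import heapq

# Symbolic part: a made-up road graph with lengths in km.
road_lengths = {
    ("Home", "Ring"): 3.0, ("Ring", "Office"): 5.0,
    ("Home", "Center"): 2.0, ("Center", "Office"): 2.5,
}

def predicted_minutes(edge, hour):
    """Stand-in for the predictive component: base travel time at
    50 km/h, with city-center roads tripling in cost at rush hour."""
    minutes = road_lengths[edge] / 50.0 * 60
    if "Center" in edge and hour == 8:
        minutes *= 3
    return minutes

def fastest_route(start, goal, hour):
    """Symbolic search (Dijkstra) over predicted edge costs."""
    neighbours = {}
    for a, b in road_lengths:
        neighbours.setdefault(a, []).append(b)
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        minutes, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, minutes
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbours.get(node, []):
            cost = predicted_minutes((node, nxt), hour)
            heapq.heappush(frontier, (minutes + cost, nxt, path + [nxt]))
    return None, float("inf")

print(fastest_route("Home", "Office", hour=8))   # rush hour: via Ring
print(fastest_route("Home", "Office", hour=22))  # quiet evening: via Center
```

Note that the search code never changes – only the predicted costs do, which is exactly the division of labour described above.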
AlphaGo
In 2015, AlphaGo became the first computer program / AI to beat a professional Go player. In many ways this program by Google’s DeepMind team set the stage for the quick rise to prominence of the Machine Learning models we have experienced over the last decade.
Over the decades many people had tried using purely Symbolic AI systems to beat a professional Go player, as Deep Blue had done for Chess in 1997. However, the search space of Go is much, much bigger than that of chess – which is already enormous – and crucially, there is no clear way of evaluating the current score of a game. This makes it incredibly hard for a Symbolic AI to distinguish between the good and bad moves it needs to consider.
AlphaGo had three main components:
- Monte-Carlo search: A specific type of Symbolic AI search, where the search takes probabilities of various actions by the opponent into account. For each step it looks ahead, it evaluates its own possible moves as well as several possible moves by the opponent, weighted by the probability of the opponent playing each one.
- Policy Network: A Predictive AI model was trained on recorded games by humans and further reinforced by playing against itself. This predictive model generated possible moves to consider from a given game-state. This meant the Monte-Carlo search didn’t have to evaluate all possible moves, but rather best-guess moves.
- Value Network: This network was trained on the game history from AlphaGo’s self-play, to predict the probability of winning the game from a given game state. This meant the Monte-Carlo search could be guided towards maximizing this score when evaluating future states.
The combination of a symbolic combinatorial search system and two Predictive AI models is what led to the huge success AlphaGo became.
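A stripped-down cousin of this search can be shown in a few lines: evaluate each legal move by random playouts and pick the best win rate (“flat” Monte-Carlo). AlphaGo additionally grows a search tree and is guided by its policy and value networks; this sketch only shows the playout idea, using the toy game of Nim (players alternate taking 1–3 stones from a pile, and whoever takes the last stone wins) rather than Go.

```python
import random

def random_playout(pile, my_turn):
    """Finish the game with uniformly random moves.
    Returns True if 'we' take the last stone."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return my_turn
        my_turn = not my_turn

def best_move(pile, playouts=2000):
    """Estimate each move's win rate by random playouts and
    pick the highest-scoring one."""
    scores = {}
    for take in range(1, min(3, pile) + 1):
        remaining = pile - take
        if remaining == 0:
            scores[take] = 1.0  # taking the last stone wins outright
            continue
        wins = sum(random_playout(remaining, my_turn=False)
                   for _ in range(playouts))
        scores[take] = wins / playouts
    return max(scores, key=scores.get), scores

random.seed(0)
move, scores = best_move(5)
print(move, scores)  # taking 1 stone (leaving a pile of 4) scores highest
```

From a pile of 5, leaving the opponent a pile of 4 is the theoretically winning move, and the playout statistics find it without any hand-written evaluation function – the same reason Monte-Carlo methods suited Go, where no good score function was known.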
Google’s DeepMind team continues working with this approach to AI, e.g. through its work on AlphaFold, which has made great strides in using Neurosymbolic AI approaches on the protein-folding problem. Being able to accurately predict protein folds will have great impact on fields within science, medicine and healthcare.
Rounding off
The current state of AI is dominated by fast innovation, new avenues of application and an inability to distinguish hype from progress. I hope my input to the discussion has cleared up some things for you, or has at least created some interesting questions you can wrestle with on your own.
AI today is a strange blend of maturity and mystery, sprinkled with an unhealthy dose of hype. Some parts are battle-tested – used daily in finance, logistics, and manufacturing. Others are speculative, barely peeking over the horizon of what’s possible. For some it sparks wonder and for others worry.
What matters most now isn’t just what AI can do – it’s how we choose to apply it, and whether we stay grounded in an understanding of the building blocks. By separating the disciplines, clarifying definitions, and resisting the urge to bundle every advance under one sweeping label, we gain the precision to actually use AI well.
The next few years won’t be about finding “the” AI breakthrough. They’ll be about stitching the right components together to solve real problems – responsibly, transparently, and with open and curious eyes.