![](https://magnuscmd.com/wp-content/uploads/2017/07/Hugo_Feature-Image-Web.png)
“Algorithms”. “Big Data”. “Artificial Intelligence” (AI). “Machine Learning” (ML). Lately, these words have jumped from backroom technical conversations (in many minds, probably straight from science fiction) into business conference rooms worldwide. Topics that were once highly technical fields of investigation are now mainstream in the definition of both short- and long-term business objectives. The energy sector is no different. In this article, we delve a bit into what these words really mean and how they are being applied to the energy sector.
ON ALGORITHMS
Let’s start with “algorithm” (that magic word which, when used in any casual sentence, will make you look either smarter or just inadequately pompous). There’s not much to it really: an algorithm is an ordered set of unambiguous, executable steps that defines an (ideally) terminating process. If you think this sounds much like that old cooking recipe from your grandmother, that’s because it does. But is that recipe an algorithm? Probably not. Why? Primitives and precision.
In this context, primitives are the sets of rules that underlie the unequivocal understanding of the instructions. In the case of your old recipe, that could be the language in which the instructions are written; in the case of a computer, it could be the principles required to read and execute them.
Precision refers to the unambiguity of the instructions. The set of instructions must be such that, given the same set of primitives and execution conditions, the result is exactly the same. If your recipe said – “a bit of salt while heating” – the result would most likely differ with each cook (and even with each run/execution). But if it said – “1.87g of salt released at the surface centre of a cylindrical vessel of water at 42.3ºC (measured on the inner side of the vessel’s walls)” – the result would most likely be the same whenever the instructions were correctly implemented.
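To make the distinction concrete, here is a minimal sketch in Python (our own illustration, not taken from any particular system) of an algorithm in this strict sense: Euclid’s method for the greatest common divisor – an ordered set of unambiguous, executable steps that always terminates and, given the same inputs and primitives, always produces the same result.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an ordered, unambiguous, terminating set of steps.

    Given the same inputs (and the same primitives -- here, Python's
    integer arithmetic), it always produces the exact same result.
    """
    while b != 0:          # the step repeats until the terminating condition is met
        a, b = b, a % b    # each step is precise: no "a bit of", no guesswork
    return a

print(gcd(1989, 867))  # 51 -- every run, on every machine
```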
ON “BIG” DATA
“We are a data/information-centred organization.” You have probably heard this sentence somewhere by now and, in fact, it has been resonating so much lately that it has even turned professional categories like “data janitor” into socially valued pursuits.
But really, what does it mean? For starters, data and information are entirely different things. Information is the end result of processing many individual facts or elements – each one a datum (the plural is data). So, the first corollary is that being able to extract valuable information from isolated bits of data can be a “science” in itself.
Nevertheless, this is not new: individuals (but especially organizations) have transferred information between themselves since time immemorial. So, what changed? The amount. Last month, almost 3,500 photos were uploaded to Facebook… every second! The acquisition speed, the diversity of types and sources, or just the sheer magnitude of the raw data some organizations are now capturing, together with the information processed from it, cannot be dealt with using traditional tools and systems. The processing is so extensive and complex that it requires distributing both processing power and storage across several machines (and sometimes groups of machines), which don’t necessarily have to be in the same physical location.
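To give a toy picture of that “divide the work across machines” idea, here is a minimal Python sketch of the map-reduce pattern that many big-data systems are built on, with local worker processes standing in for separate machines (the data and the word-count task are invented for the example):

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

# Toy "dataset" split into chunks, as if each chunk lived on a different machine.
chunks = [
    "solar wind solar grid",
    "grid demand solar storage",
    "wind wind demand grid",
]

def map_chunk(text: str) -> Counter:
    """Map step: each worker counts words in its own chunk, independently."""
    return Counter(text.split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    """Reduce step: partial results are merged into one answer."""
    return a + b

if __name__ == "__main__":
    with Pool(processes=3) as pool:            # 3 workers stand in for 3 machines
        partial = pool.map(map_chunk, chunks)  # the work runs in parallel
    totals = reduce(reduce_counts, partial, Counter())
    print(totals.most_common(3))               # counts like [('solar', 3), ('wind', 3), ('grid', 3)]
```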
Nevertheless, although we can all agree that 3,500 photos per second qualifies as “big” data, where is the line between “big” and “traditional”? There is no real answer to this question, so the term “big data” keeps being used (and sometimes abused). One thing is clear: “data” is the new sexy.
ON ARTIFICIAL INTELLIGENCE
So, first things first: AI is not new. But even though it has been around for a while, one doesn’t need to know much about the subject to intuit that the term alone indicates a field wrapped in controversy. Try, for instance, to define the word “intelligence”. Not easy, right? Now add another variable: if something behaves in an apparently “intelligent” manner, is it already intelligent? This may seem an irrelevant question, but it is in fact fundamental to the matter.
If one says – “dogs are intelligent” – doesn’t this mean that, if one creates a robot that behaves in a way that someone else cannot tell apart from a real dog then, to that person, such a machine IS a dog and therefore intelligent? If so, that conclusion should be extrapolatable to humans as well. The man considered to be the father of the modern computer thought so too, and even designed a test to validate it. Want to try it? Talk to Cleverbot or Mitsuku, for instance, and consider: if it were a WhatsApp chat with a stranger, would you notice it was a machine?
So far, the consensus (although not without its dissidents) is that no machine has passed the Turing test. Not because none has ever been mistaken for a real person (we are well past that by now), but because no machine manages it in every single situation, or adapts fast enough on its own, without help (to be fair to machines, neither do we humans most of the time).
This led to a differentiation between Artificial General Intelligence (AGI for short, also known as strong AI) and the creation of “intelligent” systems hyper-focused on specific tasks (also known as weak AI). What does this mean? It means the difference between a generic chatbot (like the ones mentioned above) and one that has learned to lead a negotiation, help children learn mathematics, encourage a person to exercise or help a maintenance worker solve a fault.
ON MACHINE LEARNING
Learning, like intelligence, covers a broad range of processes, which makes it difficult to define. So, just like with AI, we can expect some discrepancies when it comes to defining ML. Nevertheless, if you consider learning to be something like “gaining knowledge and understanding, and modifying behaviour, through experience or instruction” and you imagine a machine doing it, then you get a good picture of what ML is.
ML has impressively improved “weak AI”. Very broadly, it is a set of algorithms that, given certain conditions, allows machines to change their structure, programs or data (without human interference), based on their own inputs or on external information, with the purpose of improving (usually very specific) behavioural performance. For example: if, after each use, the performance of an “intelligent” autonomous vacuum cleaner improves, then we can consider that the machine has effectively “learned”. But how is this done? The problems associated with decision-making by machines have been approached in several ways throughout the years, and those approaches have converged into a discipline that now combines several elements:
- Mathematics: from statistics to probability to game theory
- Brain models: simulating the neural networks (connections between nodes) in the brain is one of the more mainstream and effective methods for ML – a minimal sketch follows this list
- Psychological models: evaluating whether (and how) the concept of “reward” affects the machine’s performance in the long run or when engaged in a conflict
- Evolutionary models: using a kind of selection process (mimicking natural selection) so that, through numerous iterations, an ever better solution to a problem is generated
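As a minimal, self-contained illustration of those brain models, the toy Python below trains a single artificial neuron (a perceptron) to reproduce a logical AND: instead of being programmed with the rule, the machine nudges its own internal numbers (weights) after every mistake until its behaviour improves. The task and the numbers are invented purely for illustration.

```python
# A single artificial "neuron" learning the logical AND from examples.
# The weights start out meaningless and are nudged after every mistake.

training_data = [  # (inputs, expected output)
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for epoch in range(20):                      # repeat over the examples
    for x, target in training_data:
        error = target - predict(x)          # how wrong was the neuron?
        weights[0] += learning_rate * error * x[0]   # nudge each weight
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in training_data])  # -> [0, 0, 0, 1] after training
```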
From Google to the physical fashion store around the corner, these types of programs and systems are ubiquitous nowadays. We can find them in general science, speech recognition, face recognition (computer vision), tracking, surveillance, the stock market, digital security, robot control, and so on.
ML using neural networks is in fact so effective that it has evolved into many-layered systems, an approach known as Deep Learning (DL). Google’s DeepMind and IBM’s Watson are examples of such systems.
FOR ENERGY
It’s not hard to imagine how these tools can be used to improve the performance, efficiency or reliability of the many facets of the energy industry. From production to distribution, to management, to demand, to markets – their uses are now far-reaching. Let’s take a peek.
Objects are only as good as their materials and construction, but the optimization of materials, design and construction can take years – or at least it used to. Just like in general science or pharma, “Big Data” and ML are now being used in engineering to improve the development of a broad range of energy devices – from batteries, to photovoltaic cells, to gas turbines.
Renewables are totally dependent on the weather. Although pure statistical/ML models are not the core of the dynamical weather-forecasting systems used by meteorologists, this didn’t prevent IBM from launching a self-learning weather model and renewable forecasting technology to improve solar forecasts. Other uses include improved solar tracking to bring solar production closer to its maximum potential.
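For a feel of what such a forecast can look like in code, here is a minimal sketch (not IBM’s actual system) that fits an off-the-shelf scikit-learn regressor to predict a solar plant’s output from a few weather features; the feature names and numbers are made up for the example:

```python
# Toy solar-output forecast: learn output (MW) from weather features.
# An illustrative sketch, not any vendor's real forecasting model.
from sklearn.ensemble import RandomForestRegressor

# Invented historical samples: [cloud cover %, temperature °C, hour of day]
X_history = [
    [10, 25, 12], [80, 18, 12], [30, 22, 15],
    [90, 16,  9], [ 5, 28, 13], [60, 20, 16],
]
y_history = [45.0, 12.0, 33.0, 8.0, 48.0, 20.0]   # measured output in MW

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_history, y_history)                   # "learn" from past weather/output pairs

tomorrow_noon = [[20, 24, 12]]                    # forecast weather for tomorrow
print(model.predict(tomorrow_noon))               # predicted MW for that hour
```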
Consider another famous acronym – IoT (the Internet of Things). With such a vast network of (semi-)autonomous sensors and devices, the combination of large data systems (“big data”) and machine learning can help companies and institutions take the bazillions of data points they hold and compress them into meaningful information, using the same premise as the other markets mentioned above: review and analyse the data to find patterns or similarities that can be learned from, in order to improve autonomous (as well as traditional) decision-making systems.
Under this scope, in a clear application of “Big Data”, companies like AutoGrid create software that, by assimilating large amounts of energy data (energy consumption, sensors, transformers, generators, outages, etc.), can generate automated predictions, optimize the performance of grid-connected devices, and monitor energy usage trends.
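As an illustration of that “find patterns in the data points” step (and emphatically not AutoGrid’s actual software), the sketch below groups invented daily consumption profiles with scikit-learn’s k-means, so that households with similar usage behaviour end up clustered together:

```python
# Toy example: group households by the shape of their daily consumption.
# Entirely invented data -- just to show the "find similar patterns" step.
from sklearn.cluster import KMeans

# Each row: [morning kWh, afternoon kWh, evening kWh] for one household.
daily_profiles = [
    [1.0, 0.5, 4.0],   # evening-heavy users
    [1.2, 0.4, 3.8],
    [3.5, 3.0, 1.0],   # daytime-heavy users
    [3.8, 2.8, 0.9],
    [1.1, 0.6, 4.2],
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(daily_profiles)   # which pattern each household follows

print(labels)                   # e.g. [0 0 1 1 0] -- two usage patterns found
print(kmeans.cluster_centers_)  # the "typical" profile of each group
```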
If we enter the industrial scope, companies like SparkCognition use ML to improve the solution of two problems across a broad range of industries:
- Predicting the likelihood of failure events (like a broken bearing in a gas turbine, or a blade failure in a wind turbine) before they occur, through an ever-improving detection of irregular patterns (a toy sketch of this idea follows the list);
- Improving maintenance and operations through natural language processing, by processing unstructured data to easily search through incident reports and, in particular, to interface with machine applications.
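For the first of those two problems, a minimal sketch of “irregular pattern” detection (again, not SparkCognition’s product) could use scikit-learn’s IsolationForest on invented turbine sensor readings:

```python
# Toy anomaly detection on invented turbine sensor readings.
# Not any vendor's product -- just the "flag irregular patterns" idea.
from sklearn.ensemble import IsolationForest

# Each row: [vibration (mm/s), bearing temperature (°C)]
normal_readings = [
    [2.1, 65], [2.3, 66], [1.9, 64], [2.2, 67],
    [2.0, 65], [2.4, 66], [2.1, 63], [2.2, 65],
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_readings)                 # learn what "normal" looks like

new_readings = [[2.2, 66],    # looks like business as usual
                [6.5, 92]]    # vibration and temperature way off -> likely fault
print(detector.predict(new_readings))         # 1 = normal, -1 = anomaly
```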
Early this year, Google’s DeepMind discussed with the UK’s National Grid the possibility of using artificial intelligence to help balance energy supply and demand in Britain, for instance by maximizing the use of renewables through machine-learning predictions of peaks in demand and supply. DeepMind already has a history of helping with efficiency: its systems have cut the cooling bill of Google’s data centres by 40%!
And, since we’re talking about energy – why not oil? Operators constantly need to decide what controls to apply to their wells to optimize net present value. During drilling, too, several streams of data are evaluated to ensure the best possible result. The use of data and ML in the oil industry already goes back well over a decade.
This is only the beginning for this fast-growing field. Every day we generate more and more data, which can be used to further optimize the ever-evolving machine-learning algorithms. There’s no telling how far we can go.
Don’t be surprised if the next “energy revolution” is actually led by machines.
Hugo Martins | Analyst