Sunday, May 6, 2012

Future Enterprise – The Future of Algorithms
Algorithms are taking over the world – at least the computational part of it – and that could be a good thing.
In a real sense the rise of algorithms is a sign of human intellectual maturity – of our capacity as a society to manage technology and science at a sophisticated level. It represents the coming together of our mastery of computational science with the capacity to abstract the key essence of a process, to generalise and commoditise it.
The ubiquity of algorithms is in fact the next logical step in our technological evolution as a species and perhaps marks our evolution towards super-species status.

Algorithms translate a process into instructions that a computing machine can understand, based on a mathematical, statistical and logical framework. They are usually designed to minimise and rigorise the computing steps involved in a process or formula, maximising solution efficiency in terms of computing resources while at the same time improving accuracy and verifiability.

Algorithms come in all shapes and sizes and have been around a long time – well before the official computer age. Euclid described one of the earliest known examples, a procedure for finding the greatest common divisor of two numbers; Indian mathematicians developed many more, which were documented by the 9th-century scholar al-Khwarizmi, from whose name the word derives, and later scientists such as Newton used algorithmic methods to help formalise their theories of geometry and the forces of nature.
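As a minimal illustration of what an algorithm is, Euclid's procedure for the greatest common divisor reduces to a few lines of code (a sketch in Python; the example numbers are arbitrary):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; a is then the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```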
In the future almost every process or method will be converted to an algorithm for computational processing and solution, as long as it can be defined as a series of mathematical and logical statements, ideally capable of being run on a Turing machine. A Turing machine is a mathematical model of a general computing machine, invented by Alan Turing, which underpins our current notions of computation. Turing machines come in a variety of flavours, including deterministic, quantum, probabilistic and non-deterministic, all of which can be applied to solve different classes of problems.
But regardless, any computation, even one based on an alternative logical model such as cellular automata or a recursive programming language, can in theory also be performed on a Turing machine. The brain, however, because of its enormous non-linear problem-solving capacity, has been proposed by some researchers as a super-Turing machine, but the jury is still out as to whether it falls into a different computational class from the standard Turing model.
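To make the abstraction concrete, a deterministic Turing machine can be simulated in a handful of lines. The sketch below is illustrative only; the transition table, which simply flips every bit on a binary tape, is an invented example:

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Simulate a deterministic Turing machine.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition exists for the current state/symbol pair.
    """
    tape, head = list(tape), 0
    while (state, tape[head]) in transitions:
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
        if head < 0:                 # extend the tape to the left
            tape.insert(0, blank)
            head = 0
        elif head == len(tape):      # extend the tape to the right
            tape.append(blank)
    return "".join(tape)

# Illustrative machine: invert every bit, moving right until the blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine("10110", flip_bits))  # -> "01001_"
```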

Many algorithms incorporating powerful standard mathematical and statistical techniques, such as error correction, matrix processing, random number generation, Fourier analysis, ranking, sorting and Mandelbrot set generation, were originally coded as computer routines in languages dating from the 50s and 60s, including Fortran, Algol, Lisp, Cobol and PL/I, and later in C and C++. Common algorithms were subsequently incorporated in mathematical libraries and systems such as Mathematica, making them easier to access and apply.
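As a small example of how such library routines are applied today, a Fourier analysis of a noisy signal reduces to a couple of library calls (a sketch using NumPy; the signal and sampling parameters are invented for illustration):

```python
import numpy as np

# Illustrative signal: a 5 Hz sine wave sampled at 100 Hz, plus noise.
fs = 100
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)

# The fast Fourier transform recovers the dominant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(freqs[np.argmax(spectrum)])  # ~5.0 Hz
```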

They have now infiltrated every application and industry on the planet, applied for example to streamline and rigorise operations in manufacturing, production, logistics and engineering. They cover standard operational control methods such as linear programming, process control and optimisation, simulation, queuing theory, scheduling and packing, critical path analysis, project management and quality control.
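To give a flavour of one of these methods, a small production-planning problem can be posed as a linear program and handed to an off-the-shelf solver. The sketch below uses SciPy's linprog; the products, profits and resource limits are invented purely for illustration:

```python
from scipy.optimize import linprog

# Maximise profit 3x + 5y over two products, subject to machine-time
# and labour constraints. linprog minimises, so the objective is negated.
profit = [-3, -5]
constraints = [[1, 2],   # machine hours per unit of each product
               [3, 1]]   # labour hours per unit of each product
limits = [14, 15]        # hours available of each resource

result = linprog(c=profit, A_ub=constraints, b_ub=limits, bounds=(0, None))
print(result.x)  # optimal quantities of each product, roughly [3.2, 5.4]
```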

Engineers and scientists increasingly link them to AI techniques such as Bayesian and neural networks, fuzzy logic and evolutionary programming, to optimise processes and solve complex research problems.
But over time, following the flow of computerisation, the ubiquitous algorithm has extended into every field of human endeavour, including business and finance, information technology and communication, robotics, design and graphics, medicine and biology, ecosystems and the environment, and astronomy and cosmology; in the process applying data mining, knowledge discovery, prediction and forecasting techniques to larger and larger datasets.

Indeed, whenever new technologies emerge or mature, algorithms inevitably follow, designed to do the heavy computational lifting, allowing developers to focus on the more creative aspects.

Other algorithmic applications now cover whole sub-fields of knowledge such as game theory, machine learning, adaptive organisation, strategic decision-making, econometrics, bioinformatics, network analysis and optimisation, resource allocation, planning, supply chain management and traffic flow logistics.

In addition, more and more applications which were once the province of professional experts are being drawn into the vortex of the algorithm, including heart and brain wave analysis, genome and protein structure research, quantum particle modelling, formal mathematical proofs, air traffic and transport system control, weather forecasting, automated vehicle driving, financial engineering, stock market trading and encryption analysis.

A number of such areas also involve high risk to human life, such as heavy machine operation, automatic vehicle and traffic control and critical decisions relating to infrastructure management such as dams, power plants, grids, rolling stock, bridge and road construction and container loading.
The Web of course is the new playground for algorithms, and these can have far-reaching impacts.
For example, in May 2010 the Dow Jones Industrial Average dropped nearly 1,000 points in a matter of minutes in what is now known as the Flash Crash. For a few minutes several trading algorithms appear to have been locked in a death dance, much like two closely bound neutron stars spiralling towards collision, triggering a massive collapse in the value of the US stock market. It was a wake-up call to the fact that in any complex system involving multiple feedback loops, unforeseen combinations of computational events will sooner or later occur.

Even today’s news headlines are shaped by algorithms. Not only do Internet users select feeds for the personalised content they prefer, perhaps on a feel-good basis, but stories are also selected and curated by search-engine algorithms to suit categories of advertisers. This raises the issue of algorithms being applied to create different bubble realities that may not reflect the priorities of society as a whole, such as global warming, democracy at risk or a critical food shortage.


A major dimension of the impact of algorithms is the issue of job obsolescence. It is not just the unskilled jobs of shop assistants, office administrators, factory workers and marketing and research assistants that are at risk, but also middle-class, white-collar occupations, from paralegals to journalists to news readers. As algorithms become smarter and more pervasive this trend will extend up the food chain to many higher-level management categories, where strategic rather than operational decision-making is primary.

And so we come to the millions of smartphone apps now available to support us in every aspect of our daily activities. They can also lead us to the dark side of a big-brother society, in which the pervasive monitoring of location, shopping transactions and social connections allows every individual’s life and timeline to be tracked and analysed using algorithms, with everyone eventually becoming a person of interest in the global society.

Social networks trade personal information to generate revenues, while the individual loses their right to privacy without receiving any compensation. Certainly the area of apps governing personal and lifestyle choices is now being invaded by ubiquitous algorithms in the form of recommendation systems. Much of the information garnered from social networks is filtered and personalised to guide lifestyle and entertainment: selecting an exercise regime, a relationship, an online book or author, or a restaurant or movie, based on past experience and behavioural profiles. Already a third of US shoppers use the internet to make a buying decision.
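A minimal sketch of how such a recommender might work, using user-based collaborative filtering over a tiny ratings matrix (the users, items and scores are invented purely for illustration):

```python
import numpy as np

# Rows are users, columns are items (say, movies); 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(user, ratings):
    """Score each unrated item for `user` by the similarity-weighted
    ratings other users gave it, and return the best-scoring item."""
    others = [o for o in range(ratings.shape[0]) if o != user]
    weights = [cosine(ratings[user], ratings[o]) for o in others]
    scores = {}
    for item in range(ratings.shape[1]):
        if ratings[user, item] == 0:  # only recommend unseen items
            votes = [ratings[o, item] for o in others]
            scores[item] = np.dot(weights, votes) / sum(weights)
    return max(scores, key=scores.get)

print(recommend(0, ratings))  # index of the item most likely to appeal to user 0
```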

These subliminal recommender systems represent the beginning of an always-on surveillance of the individual, tracking your car by GPS or recognising your face in a photograph, now combined with AI-driven virtual assistants such as Siri. More recent algorithms also have the potential to combine such information to infer further hidden aspects of a person’s lifestyle.

But the real problem with such recommender systems is their poor record at forecasting, particularly in areas of complex human behaviour and desire. In addition, the inner logic governing their Delphic predictions is generally opaque, meaning that guesswork is conveniently covered up while decision-making is dumbed down.
Enterprises such as banks, insurance companies, retail outlets and government agencies compete to build algorithms to feed insatiable databases of personal profiles, constantly analysed for hidden consumer patterns to discover who is most likely to default on a loan, buy a book, listen to a song or watch a movie. Or who is most likely to build a bomb?
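A caricature of the kind of scoring model involved might be a logistic regression over a handful of customer attributes. The sketch below uses scikit-learn; the features and figures are assumptions for illustration, not any institution's actual model:

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: [income in $k, existing debt in $k, missed payments]
profiles = [[60, 5, 0], [25, 20, 3], [80, 2, 0],
            [30, 15, 2], [45, 10, 1], [20, 25, 4]]
defaulted = [0, 1, 0, 1, 0, 1]   # 1 = defaulted on a past loan

model = LogisticRegression().fit(profiles, defaulted)

# Estimated probability that a new applicant will default.
print(model.predict_proba([[35, 18, 2]])[0][1])
```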
The rise of the algorithms embedded in our lives could not have occurred without the surge in the inter-connected, online existence we lead today. We are increasingly part of the web and its numerous sub-networks, constantly in a state of flux.
A supermarket chain can access detailed data not only from its millions of loyalty cards but also from every transaction in every branch. A small improvement in efficiency can save millions of dollars. The mushrooming processing power of computers means that the data collected can be stored and churned continuously in the hunt for persons of interest. So who is to stop them if consumer groups aren’t vigilant?

This is not too much of a nuisance when choosing a book or a movie, but it can be a serious problem if applied to credit rating assessment or authorisation of healthcare insurance. If an algorithm is charged with predicting whether an individual is likely to need medical care, how might that affect their quality of life? Is a computer program better able to calculate kidney transplant survival statistics and decide who should receive a donor organ? Algorithms are now available to diagnose cancer and determine the optimum heart management procedure using the latest worldwide research. Can human doctors compete in the longer term and will algorithms be better at applying game theory to determine the ethical outcomes of who should live or die?

The ethics of data mining is not limited to privacy or medical issues. Should the public have more control over the application of algorithms that guide killer drones towards human targets? Eventually computer-controlled drones will rule the skies, potentially deciding on targets independently of humans as their AI selection algorithms improve. But if an innocent civilian is mistaken for the target or coordinates are accidentally scrambled, can the algorithm be corrected in time to avoid collateral damage?

So algorithms, like every other artifact or life form on the planet, must have built-in adaptation strategies to stay relevant; if not, they could become hidden time bombs. They will require ultra-rigorous testing and maintenance over time because, like any process governed by a changing environment, they can become obsolete – witness the Y2K bug and the automated trading anomaly described above. If used for prediction and trend forecasting they are particularly risky to humans. If the environment changes outside the original design parameters, then the algorithm must also be adapted immediately, otherwise prediction models and simulators such as the proposed FuturICT global social observatory might deliver devastatingly misleading forecasts.

As mentioned, a number of artificial intelligence techniques depend on algorithms for their core implementation, including genetic algorithms, Bayesian networks, fuzzy logic, swarm intelligence, neural networks and intelligent agents.
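As one example, the core loop of a genetic algorithm (selection, crossover and mutation over a population of candidate solutions) fits in a few lines. The sketch below maximises a toy fitness function, the number of 1s in a bit string; the population size, mutation rate and other parameters are invented for illustration:

```python
import random

def fitness(bits):
    """Toy objective: count the 1s in a candidate bit string."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Crossover and mutation to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # best candidate found, ideally a string of all 1s
```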

The future of business intelligence lies in systems that can guide and deliver increasingly smart decisions in a volatile and uncertain environment. Such decisions, incorporating sophisticated levels of intelligent problem-solving, will increasingly be made autonomously and under real-time constraints to achieve the level of adaptability required to survive, particularly now within an environment of global warming.

In this new adaptive world the algorithm is therefore a two-edged sword. On the one hand it can create the most efficient path to implementing a process. But on the other, if it is inflexible and incapable of adapting, for example choosing to continue to manufacture large fossil-fuel-burning vehicles, it can lead to collapse, as in the case of Ford and GM.
Good decision-making is therefore dependent on adapting to changes in the marketplace, which involves a shift towards predictive performance management: moving beyond simple extrapolation metrics to a form of AI-based software analysis and learning, such as that offered by evolutionary algorithms.

Life depends on adaptive algorithms as well: assessing the distance to a food source encoded in the dance of a bee, determining the meaning of speech or acoustic sounds, discriminating between friend and foe, the ability of a bird to navigate based on the polarisation angle of the sun, or a bat avoiding collisions based on split-second acoustic calculations.
These algorithms have taken millions of years to evolve and they keep evolving as the animal adapts in relation to its environment.
But here’s the problem for man-made algorithms: very few have been designed with the capacity to evolve without direct human intervention, which may come too late, as in the case of an obsolete vaccine or an inadequately encrypted file.

The rate of change impacting enterprise environments in the future will continue to accelerate, forcing decisions to be made ever faster and increasingly autonomously, with minimal human intervention. This has already occurred in advanced control, communication and manufacturing systems and is becoming increasingly common at the operational level in e-business procurement, enterprise resource planning, financial management and marketing applications, all of which depend on a large number of algorithms.

Dynamic decision support architectures will be required to support this momentum and be capable of drawing seamlessly on external as well as internal sources of knowledge to facilitate focussed decision capability.
Algorithms will need to evolve to underpin such architectures and act as a bulwark in this uncertain world, eventually driving decisions without human intervention; but only if they are self-verifying within the parameters of their human and computational environment.