Friday, April 30, 2010

Future Enterprise- Future Brain Architecture

Is today’s enterprise, including its IT acolytes, missing something very obvious and vitally important in its current management mindset, or is a traditionally conservative constituency simply unable to accept the radical paradigm shift involved?

Enterprise IT is beginning to dip its toe in the water and borrow some of its inspiration from biological models. For example, a number of the most valuable AI techniques routinely applied in business- genetic algorithms, neural networks, DNA computing and swarm computation- are biologically based, as is the concept of the organisation as a complex ecosystem, rather than a rigid hierarchical structure largely disconnected from its environment.

Networks are also getting a look-in. Complex decision-making, using elements of autonomous, self-organising and intelligent networks and incorporating complex feedback loops to monitor operational performance and enhance relationships with customers and suppliers, is now being trialled.

But the current enterprise management model is still missing the big picture- the shift towards an efficient, self-regulating, self-organising, self-evolving framework, so critical for survival in a future fast-moving, uncertain physical and social environment.
The most efficient blueprint for such an architecture and one honed over billions of years and governing all animal life, is the living brain; in particular the advanced human brain.

For the last thirty years, since the advent of computerised imaging techniques, scientists have been trying to prise open the secrets of the brain’s incredible power and flexibility. Not just how it computes so efficiently, but its ability to adapt, evolve and manage its 100 billion neurons and dozens of specialised structures, as well as all the relationships of the body’s incredibly rich cellular processes, organs and bio-systems. It has also mastered the capacity to flexibly adapt to a vast number of environmental challenges- both physical and social, while at the same time continuing to evolve and grow its intelligence at the individual, group and species level.

If only it were possible to harness this most complex object in the universe to manage our own still-primitive, nascent organisational structures.

So what’s the secret to the brain’s incredible success in guiding the human race through its evolutionary odyssey? Well, finally, the creativity and perseverance of countless dedicated scientists are starting to pay dividends, with two recent major conceptual breakthroughs-
A Unified Theory of the Brain and the key to the Sub-conscious Brain.

Current theories of the mind and brain have primarily focussed on defining the mental behaviour of others using the brain’s mirror neurons. These are a set of specialised cells that fire when an animal observes an action performed by another. The neurons therefore ‘mirror’ or reflect the behaviour of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and, more recently, in humans, and are believed to exist in other species, such as birds.

However, despite an increasing understanding of the role of such mechanisms in shaping the evolution of the brain, current theories had failed to provide an overarching or unified framework linking all mental and physical processes- until recently. A group of researchers from University College London, headed by neuroscientist Karl Friston, has now derived a mathematical framework that provides a credible basis for such a holistic theory.

This is based on Bayesian probability theory, which allows predictions to be made about the validity of a proposition or phenomenon based on the evidence available. Friston’s hypothesis builds on an existing theory known as the “Bayesian Brain”, which postulates the brain as a probability machine that constantly updates its predictions about its environment based on its perception, memory and computational capacity. In other words it is constantly learning about its place in the world by filtering input knowledge through a statistical assessment process.

The crucial element in play is that these encoded probabilities are based on cumulative experience or evidence, which is updated whenever additional relevant data becomes available, such as visual information about an object’s location or behaviour. Friston’s theory therefore treats the brain as an inferential agent, continuously refining and optimising its model of the past, present and future.
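To make the idea concrete, here is a minimal sketch of Bayesian belief updating in Python (the Gaussian setting and all the numbers are purely illustrative assumptions, not part of Friston’s model): a vague prior belief about an object’s position is fused with a series of noisy observations, and the belief sharpens with each new piece of evidence.

```python
# Minimal illustration of Bayesian belief updating (illustrative numbers).
# A Gaussian prior belief about an object's position is fused with a
# noisy observation; the posterior is a precision-weighted average.

def update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation."""
    prior_prec = 1.0 / prior_var          # precision = inverse variance
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Start with a vague belief, then accumulate evidence.
mean, var = 0.0, 10.0
for observation in [2.1, 1.9, 2.2]:       # noisy sightings
    mean, var = update(mean, var, observation, obs_var=1.0)

print(mean, var)   # the belief sharpens toward the true position (~2)
```

Each update both moves the estimate toward the evidence and shrinks its uncertainty- exactly the “constantly updating its predictions” behaviour the Bayesian Brain hypothesis describes.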

This can be seen as a generic process applied to all functions and protocols embedded in the brain; continually adapting the internal state of its myriad neural connections, as it learns from its experience. In the process it attempts to minimise the gap between its predictions and the actual state of the external environment on which its survival depends.

Minimising this gap or prediction error is crucial and can be measured using the concept of ‘free energy’ from thermodynamics and statistical mechanics. Free energy is the amount of useful work that can be extracted from a system such as an engine- roughly, the system’s total energy minus the energy unavailable for work, bound up in its entropy. In this case the prediction error is equated to the free energy of the system, which must be minimised as far as practical if the organism is to continue to develop.

All functions of the brain have therefore evolved to reduce predictive errors to enhance the learning process. When the predictions are right, the brain is rewarded by being able to respond more efficiently and effectively, using less energy. If it is wrong, additional energy is required to find out why and formulate a better set of predictions.
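This error-driven loop can be caricatured in a few lines of Python (a toy stand-in with an assumed update rule, not Friston’s actual mathematics): an internal estimate is nudged toward the sensed state at each step, so the squared prediction error- the stand-in for free energy here- steadily shrinks.

```python
# Toy sketch of prediction-error minimisation (the learning rule and
# numbers are assumptions, loosely inspired by the free-energy idea).
# An internal estimate is nudged toward the sensed world state each
# step, so the squared prediction error shrinks over time.

def minimise_error(world_state, steps=50, rate=0.2):
    estimate = 0.0                        # the brain's initial model
    errors = []
    for _ in range(steps):
        error = world_state - estimate    # prediction error
        estimate += rate * error          # update the model to reduce it
        errors.append(error ** 2)
    return estimate, errors

estimate, errors = minimise_error(world_state=5.0)
print(errors[0], errors[-1])   # early error is large, late error tiny
```

When predictions are accurate the correction term is near zero and little “energy” is spent; a surprise produces a large error and a correspondingly large update- mirroring the reward/cost asymmetry described above.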

The second breakthrough has come from a better understanding, again through neuro-imaging, of the brain’s subconscious processes. It has been revealed that the brain is incredibly active even when a person is not purposely thinking or acting, for example when daydreaming or asleep. It is in fact keeping subliminal watch- communicating, synchronising and prepping its networks for a future conscious action or response, and continuously organising and refining its neural systems such as the cortex and memory; in the process using up to twenty times as much energy as the conscious mode of operation requires. This mechanism is called the brain’s default mode network or DMN and has only recently been recognised as a cogent system in its own right.

Now fast forward to the future enterprise, running under an architecture that incorporates these two knowledge breakthroughs. What are the additional benefits over the old model? Not too difficult to deduce.

Any organisation that is capable of constantly and seamlessly monitoring itself in relation to its internal functions and external environment; assessing its performance against its predictions and requirements in real-time through efficient feedback mechanisms; remaining aware of changes in its environment and opportunities to improve its performance and productivity; self-optimising its functions and goals; self-correcting its actions; searching autonomously for the best solutions to complex decision-making problems; and constantly building on its experience and intelligence- must mark a vast improvement over the current model.

Not only that- this model has been tested and operationally proven in the cauldron of evolution over almost four billion years. Not a bad benchmark!
Too difficult to introduce into mainstream enterprise operations? I don’t think so, not in an era when we can build the world wide web, space stations, large particle colliders and models of galaxies and the multiverse, apply genetic engineering techniques to treat diseases, grow new organs from stem cells and plan to put humans on Mars!

Monday, April 12, 2010

Future Enterprise- Rebirthing Hal

The arrival of super-smart evolutionary computers, capable of autonomous reasoning, learning and emulating the human-like behaviour of the mythical HAL in Arthur C. Clarke’s 2001: A Space Odyssey, is imminent.

The Darwinian evolutionary paradigm has finally come of age in the era of super-computing. The AI evolutionary algorithm, which now guides many problem-solving and optimisation processes, is also being applied to the design of increasingly sophisticated computing systems. In a real sense, the evolutionary paradigm is guiding the design of evolutionary computing, which in turn will lead to the development of more powerful evolutionary algorithms. This process will inevitably lead to the generation of hyper-smart computing systems and therefore advanced knowledge, with each evolutionary computing advance catalysing the next in a fractal process.

Evolutionary design principles have been applied in all branches of science and technology for over a decade, including the development of advanced electronic hardware and software, now incorporated in personal computing devices and robotic controllers.
One of the first applications to use a standard genetic algorithm was the design of an electronic circuit which could discriminate between two tone signals, or voices in a crowded room. This was achieved by using a Field Programmable Gate Array or FPGA chip, on which a matrix of transistors or logic cells was reprogrammed on the fly. Each new design configuration was varied or mutated and could then be immediately tested for its ability to achieve the desired output- discriminating between the two signal frequencies.
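The vary-mutate-test loop at the heart of such a genetic algorithm can be sketched in a few lines of Python (the bit-string “circuit”, fitness function and parameters here are illustrative assumptions, not the original FPGA experiment): candidate designs are scored against a target behaviour, the fittest survive, and mutated copies of the survivors fill the next generation.

```python
import random

# Toy genetic algorithm illustrating the vary-and-test loop described
# above (the bit-string "circuit" and all parameters are illustrative;
# the real experiment evolved FPGA configurations against tone signals).

TARGET = [1, 0] * 16                      # the behaviour we want to evolve
random.seed(42)                           # reproducible run

def fitness(genome):
    """Count how many bits match the desired behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                             # perfect "circuit" found
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))   # evolution closes in on the target
```

No designer specifies *how* to reach the target; the selection pressure alone drives the population toward it- the same principle that let the FPGA discover circuits no engineer had designed.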

Such evolutionary-based technologies provide the potential to not only optimise the design of computers, but facilitate the evolution of self-organisational learning and replicating systems that design themselves. Eventually it will be possible to evolve truly intelligent machines that can learn on their own, without relying on pre-coded human expertise or knowledge.

In the late forties, John von Neumann conceptualised a self-replicating computer using a cellular automaton architecture- identical computing devices arranged in a chequerboard pattern, each changing its state based on the states of its nearest neighbours. One of the earliest examples was the Firefly machine, with 54 cells controlled by circuits which evolved to flash on and off in unison.
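Von Neumann’s nearest-neighbour update rule is simple to express in code. Here is a minimal one-dimensional cellular automaton in Python (Rule 110 is chosen purely for illustration): every cell’s next state depends only on itself and its two immediate neighbours, yet complex global patterns emerge from a single live cell.

```python
# Minimal one-dimensional cellular automaton: each cell's next state
# depends only on itself and its two nearest neighbours, in the spirit
# of von Neumann's chequerboard model (Rule 110 chosen for illustration).

RULE = 110                                # Wolfram rule number

def step(cells):
    """Apply the rule to every cell simultaneously (wrap-around ends)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (me << 1) | right    # neighbourhood 0..7
        nxt.append((RULE >> pattern) & 1)            # look up next state
    return nxt

cells = [0] * 31 + [1] + [0] * 31         # a single live cell as seed
history = [cells]
for _ in range(20):
    cells = step(cells)
    history.append(cells)

print(sum(history[0]), "->", sum(cells))  # activity spreads from the seed
```

From one bit of “life”, structured activity propagates across the lattice- a small taste of how local rules can generate the global coordination the Firefly machine evolved.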

The evolvable hardware that researchers created in the late 90’s and early this century was proof of principle of the potential ahead. For example, a group of Swiss researchers extended Von Neumann's dream by creating a self-repairing, self-duplicating version of a specialised computer. In this model, each processor cell or biomodule was programmed with an artificial chromosome, encapsulating all the information needed to function together as one computer and capable of exchanging information with other cells. As with each biological cell, only certain simulated genes were switched on to differentiate its function within the body.

A stunning example of the application of Darwinian principles to the mimicking of life was the development of the CAM (Cellular Automata Machine) Brain in 2000. It contained 40 million neurons, running on 72 linked FPGAs of 450 million autonomous cells. The first hyper-computer, the HAL-4rw1 from Star Bridge Systems, also reached commercial production in 2000. Based on FPGA technology, it operated at four times the speed of the world’s fastest supercomputer.
And at the same time NASA began to create a new generation of small intelligent robots called ‘biomorphic’ explorers, designed to react to the environment in similar ways to living creatures on earth.

Another biological approach applied to achieve intelligent computing was the neural network model. Such networks simulate the firing patterns of neural cells in the brain, which accumulate incoming signals until a discharge threshold is reached, allowing information to be transmitted to the next layer of connected cells. Digital models, however, cannot accurately capture the subtle firing patterns of real-life cells, which contain elements of both periodic and chaotic timing. The latest simulations therefore use analogue neuron circuits to capture the information encoded in these time-sensitive patterns and mimic real-life behaviour more accurately.
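The accumulate-and-discharge behaviour described above can be sketched as a toy integrate-and-fire neuron in Python (the threshold, leak and input values are illustrative assumptions, not a biophysical model): charge builds with each incoming signal until the threshold is crossed, the cell fires, and its potential resets.

```python
# Toy integrate-and-fire neuron: incoming signals accumulate until a
# discharge threshold is reached, then the cell fires and resets.
# Threshold, leak and inputs are illustrative, not biophysical values.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak              # fraction of charge kept per step
        self.potential = 0.0

    def receive(self, signal):
        """Accumulate an input; return True if the neuron fires."""
        self.potential = self.potential * self.leak + signal
        if self.potential >= self.threshold:
            self.potential = 0.0      # discharge and reset
            return True
        return False

neuron = Neuron()
inputs = [0.3, 0.3, 0.3, 0.3, 0.0, 0.3]
spikes = [neuron.receive(s) for s in inputs]
print(spikes)                         # fires once the charge crosses 1.0
```

Even this caricature shows why timing matters: the leak term makes the output depend on *when* signals arrive, not just how many- the time-sensitive behaviour that analogue circuits capture far better than purely digital models.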
Neural networks and other forms of biological artificial intelligence are now being combined with evolutionary models, taking a major step towards the goal of artificial cognitive processing; allowing intelligent computing systems to learn on their own and become experts in any chosen field.

Eventually it will be possible to use evolutionary algorithms to design artificial brains, augmenting or supplanting biological human cognition. This is a win-win for humans. While the biological brain, with its tens of billions of neurons each connected to thousands of others, has assisted science to develop useful computational models, a deeper understanding of computation and artificial intelligence is also providing neuroscientists and philosophers with greater insights into the nature of the brain and its cognitive processes.

The future implications of the evolutionary design paradigm are therefore enormous. Universal computer prototypes capable of continuous learning are now reaching commercial production. Descendants of these systems will continue to evolve, simulating biological evolution through genetic mutation and optimisation, powered by quantum computing. They will soon create capabilities similar to those of HAL in Arthur C. Clarke’s 2001: A Space Odyssey- and only a few decades later than predicted.

However the reincarnation of the legendary HAL may in fact be realised by a much more powerful phenomena incorporating all current computing and AI advances - the Intelligent World Wide Web. As previously discussed, this multidimensional network of networks, empowered by human and artificial intelligence and utilising unlimited computing and communication power, is well on the way to becoming a self-aware entity and the ultimate decision partner in our world.

Perhaps HAL is already alive and well.