Sunday, May 6, 2012

Future Enterprise- The Future of Algorithms

Algorithms are taking over the world – at least the computational part of it – and that could be a good thing.
In a real sense the rise of algorithms is a sign of human intellectual maturity: a measure of our capacity as a society to manage technology and science at a sophisticated level. It represents the coming together of our mastery of computational science with the capacity to abstract the key essence of a process – to generalise and commoditise it.
The ubiquity of algorithms is in fact the next logical step in our technological evolution as a species and perhaps marks our evolution towards super-species status.

Algorithms translate a process into instructions that a computing machine can understand, based on a mathematical, statistical and logical framework. They are usually designed to minimise and rigorise the computing steps involved in a process or formula, maximising its efficiency in terms of computing resources while at the same time improving its accuracy and verifiability.

Algorithms come in all shapes and sizes and have been around a long time – well before the official computer age. Applied by Euclid in antiquity to formalise his geometry, developed further by Indian mathematicians and documented by the 9th-century Muslim scholar al-Khwarizmi, from whose name the word derives, they were later used by Newton and others to formalise theories of the forces of nature.
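Euclid's greatest-common-divisor procedure is often cited as the oldest non-trivial algorithm still in everyday use, and it illustrates the essence of the form: a finite, unambiguous recipe. A minimal sketch in Python:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # → 21
```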
In the future almost every process or method will be converted to an algorithm for computational processing and solution, as long as it can be defined as a series of mathematical and logical statements, ideally capable of being run on a Turing machine. A Turing machine is a mathematical model of a general computing machine, invented by Alan Turing, which underpins our current computing. Turing machines come in a variety of flavours, including deterministic, quantum, probabilistic and non-deterministic, all of which can be applied to solve different classes of problems.
But regardless, any computation, even one based on alternate logical models such as cellular automata or recursive programming languages, can also theoretically be performed on a Turing machine. The brain, however, because of its enormous non-linear problem-solving capacity, has been proposed by some researchers as a super-Turing machine, but the jury is still out as to whether it falls in a different computational class to the standard Turing model.
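As a rough illustration of the deterministic model, a Turing machine reduces to a tape, a read/write head and a transition table. The toy machine below is an invented example for illustration only – it flips a string of bits and halts when it reaches a blank:

```python
def run_turing_machine(tape, rules, state="start"):
    """Minimal deterministic Turing machine: rules maps
    (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))          # sparse tape, '_' means blank
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Flip every bit, moving right until a blank is reached.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flip))  # → 0100
```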

Many algorithms incorporating powerful standard mathematical and statistical techniques, such as error correction, matrix processing, random number generation, Fourier analysis, ranking, sorting and Mandelbrot set generation, were coded originally as computational routines using languages dating from the 1950s and 60s, including Fortran, Algol, Lisp, Cobol and PL/1, and later C and C++. Common algorithms were subsequently incorporated in mathematical libraries and packages such as Mathematica, making them easier to access and apply.

They have now infiltrated every application and industry on the planet, applied for example to streamline and rigorise operations in manufacturing, production, logistics and engineering. They cover standard operational control methods such as linear programming, process control and optimisation, simulation, queuing, scheduling and packing theory, critical path analysis, project management and quality control.

Engineers and scientists increasingly link them to AI techniques such as Bayesian and Neural networks, Fuzzy logic and Evolutionary programming, to optimise processes and solve complex research problems.
But over time, following the flow of computerisation, the ubiquitous algorithm has extended into every field of human endeavour including- business and finance, information technology and communication, robotics, design and graphics, medicine and biology, ecosystems and the environment and astronomy and cosmology; in the process applying data mining, knowledge discovery and prediction and forecasting techniques to larger and larger datasets.

Indeed, whenever new technologies emerge or mature, algorithms inevitably follow, designed to do the heavy computational lifting, allowing developers to focus on the more creative aspects.

Other algorithmic applications now cover whole sub-fields of knowledge such as- game theory, machine learning, adaptive organisation, strategic decision-making, econometrics, bioinformatics, network analysis and optimisation, resource allocation, planning, supply chain management and traffic flow logistics.

In addition, more and more applications are being drawn into the vortex of the algorithm which were once the province of professional experts including- heart and brain wave analysis, genome and protein structure research, quantum particle modelling, formal mathematical proofs, air traffic and transport system control, weather forecasting, automatic vehicle driving, financial engineering, stock market trading and encryption analysis.

A number of such areas also involve high risk to human life, such as heavy machine operation, automatic vehicle and traffic control and critical decisions relating to infrastructure management such as dams, power plants, grids, rolling stock, bridge and road construction and container loading.
The Web of course is the new playground for algorithms, and these can also have far-reaching impacts.
For example in 2010, the Dow Jones Industrial Average dropped nearly 1,000 points in a matter of minutes in what is now known as the Flash Crash. It appears that for a few minutes several algorithms were locked in a death dance, in much the same manner as two closely bound neutron stars before implosion, triggering a massive collapse in the value of the US stock market. It was a wake-up call to the fact that in any complex system involving multiple feedback loops, unforeseen combinations of computational events will sooner or later occur.
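The amplifying feedback behind such an event can be caricatured in a few lines. This is a deliberately simplified toy model – the shock, impact and step parameters are invented for illustration and bear no relation to the actual 2010 market mechanics:

```python
# Toy model of algorithmic feedback: momentum traders each sell when
# the price falls, and their combined selling deepens the next fall.
def simulate(price=100.0, steps=8, shock=-0.5, impact=1.5):
    history = [price]
    price += shock                      # a small external shock starts the spiral
    history.append(price)
    for _ in range(steps):
        change = history[-1] - history[-2]
        if change < 0:                  # the algorithms react to the drop...
            price += impact * change    # ...and amplify it
        history.append(price)
    return history

prices = simulate()
print(prices[0], "->", round(prices[-1], 2))
```

Because each fall triggers selling that is larger than the fall itself, the decline compounds geometrically until something outside the loop intervenes.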

Even today’s news headlines are shaped by algorithms. Not only do Internet users select feeds relating to the personalised content they prefer, perhaps on a feel-good basis, but stories are also selected and curated by search-engine algorithms to suit categories of advertisers. This raises the issue of algorithms being applied to create different bubble realities that may not reflect the priorities of society as a whole – such as global warming, democracy at risk or a critical food shortage.


A major dimension of the impact of algorithms is job obsolescence. It is not just the jobs of shop assistants, office administrators, factory workers and marketing and research assistants that are at risk, but middle-class, white-collar occupations, from paralegals to journalists to news readers. As algorithms become smarter and more pervasive this trend will extend up the food chain to many higher-level management categories, where strategic rather than operational decision-making is primary.

And so we come to the millions of smartphone apps now available to support us in every aspect of our daily activities. These can also lead us to the dark side of a big-brother society, where through pervasive monitoring of location, shopping transactions and social connections, every individual’s life and timeline can be tracked and analysed using algorithms, with everyone eventually becoming a person of interest in the global society.

Social networks trade personal information to generate revenues, while individuals lose their right to privacy without receiving any compensation. Certainly the area of apps governing personal and lifestyle choice is now being invaded by ubiquitous algorithms in the form of recommendation systems. Much of the information garnered from social networks is filtered and personalised to guide lifestyle and entertainment – selecting an exercise regime, a relationship, an online book or author, a restaurant or a movie based on past experience and behavioural profiles. Already a third of US shoppers use the Internet to make a buying decision.

These subliminal recommender systems represent the beginning of an always-on presence in individual lives – tracking your car by GPS or recognising your face in a photograph, now combined with AI-driven virtual assistants such as Siri. More recent algorithms also have the potential to combine such information to infer further hidden aspects of lifestyle.

But the real problem with such recommender systems is their poor record at forecasting, particularly in areas of complex human behaviour and desire. In addition, the inner logic governing their Delphic predictions is generally hidden and opaque, meaning that guesswork is conveniently covered up while decision-making becomes dumbed down.
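The core mechanics of such a recommender can be sketched with item-to-item collaborative filtering. The ratings data and item names below are entirely hypothetical, and real systems are vastly more elaborate, but the principle is the same: score unseen items by their similarity to the items a user already liked.

```python
from math import sqrt

# Hypothetical ratings: user -> {item: score}. A minimal sketch, not
# any vendor's actual system.
ratings = {
    "ann":   {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 2, "book_c": 5},
    "carol": {"book_a": 1, "book_b": 5},
}

def similarity(item1, item2):
    """Cosine similarity over users who rated both items."""
    common = [u for u in ratings if item1 in ratings[u] and item2 in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][item1] * ratings[u][item2] for u in common)
    norm1 = sqrt(sum(ratings[u][item1] ** 2 for u in common))
    norm2 = sqrt(sum(ratings[u][item2] ** 2 for u in common))
    return dot / (norm1 * norm2)

def recommend(user):
    """Pick the unseen item most similar to what the user rated highly."""
    seen = ratings[user]
    candidates = {i for r in ratings.values() for i in r} - set(seen)
    return max(candidates,
               key=lambda c: sum(similarity(c, s) * seen[s] for s in seen))

print(recommend("carol"))  # → book_c
```

Note the opacity problem in miniature: the final score is a weighted blend of similarities, and nothing in the output explains why the item was chosen.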
Enterprises such as banks, insurance companies, retail outlets and Government agencies compete to build algorithms to feed insatiable databases of personal profiles; constantly analysed for hidden consumer patterns to discover who is most likely to default on a loan, buy a book, listen to a song or watch a movie. Or who is most likely to build a bomb?
The rise of the algorithms embedded in our lives could not have occurred without the surge in the inter-connected, online existence we lead today. We are increasingly part of the web and its numerous sub-networks, constantly in a state of flux.
A supermarket chain can access detailed data not only from its millions of loyalty cards but also from every transaction in every branch. A small improvement in efficiency can save millions of dollars. The mushrooming processing power of computers means that the data collected can be stored and churned continuously in the hunt for persons of interest. So who is to stop them if consumer groups aren’t vigilant?

This is not too much of a nuisance when choosing a book or a movie, but it can be a serious problem if applied to credit rating assessment or authorisation of healthcare insurance. If an algorithm is charged with predicting whether an individual is likely to need medical care, how might that affect their quality of life? Is a computer program better able to calculate kidney transplant survival statistics and decide who should receive a donor organ? Algorithms are now available to diagnose cancer and determine the optimum heart management procedure using the latest worldwide research. Can human doctors compete in the longer term and will algorithms be better at applying game theory to determine the ethical outcomes of who should live or die?

The ethics of data mining is not limited to privacy or medical issues. Should the public have more control over the application of algorithms that guide killer drones towards human targets? Eventually computer-controlled drones will rule the skies, potentially deciding on targets independently of humans as their AI selection algorithms improve. But if an innocent civilian is mistaken for the target or coordinates are accidentally scrambled, can the algorithm be corrected in time to avoid collateral damage?

So algorithms must have built-in adaptation strategies to stay relevant, like every other artifact or life form on the planet. If not, they could become hidden time bombs. They will require ultra-rigorous testing and maintenance over time because, like any process governed by a changing environment, they can become obsolete – witness the Y2K computer bug and the automatic trading anomaly. If used for prediction and trend forecasting they will be particularly risky to humans. If the environment changes outside the original design parameters, then the algorithm must also be immediately adapted; otherwise prediction models and simulators such as the proposed FuturICT global social observatory might deliver devastatingly misleading forecasts.

As mentioned, a number of artificial intelligence techniques depend on algorithms for their core implementation, including genetic algorithms, Bayesian networks, fuzzy logic, swarm intelligence, neural networks and intelligent agents.

The future of business intelligence lies in systems that can guide and deliver increasingly smart decisions in a volatile and uncertain environment. Such decisions, incorporating sophisticated levels of intelligent problem-solving, will increasingly be applied autonomously and within real-time constraints to achieve the level of adaptability required to survive, particularly in an environment of global warming.

In this new adaptive world the algorithm is therefore a two-edged sword. On the one hand it can create the most efficient path to implementing a process. But on the other, if it is inflexible and incapable of adapting, for example choosing to continue to manufacture large fossil fuel burning vehicles, it can lead to collapse, as in the case of Ford and GM.
Good decision-making therefore depends on adapting to changes in the marketplace, which involves a shift towards predictive performance management – moving beyond simple extrapolation metrics to artificial-intelligence-based software analysis and learning, such as that offered by evolutionary algorithms.
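The simplest member of this family is the (1+1) evolutionary strategy: mutate a candidate model parameter and keep the mutant only if it scores a lower forecast error. The error function and parameters below are invented for illustration:

```python
import random

# Minimal (1+1) evolutionary strategy: mutate, evaluate, keep the
# mutant only if it forecasts better (lower error).
def evolve(error, x=0.0, steps=2000, step_size=0.1):
    best = error(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, step_size)
        score = error(candidate)
        if score <= best:           # survival of the fitter forecast
            x, best = candidate, score
    return x

random.seed(1)
target = 3.0                        # the 'environment' the model must track
x = evolve(lambda v: (v - target) ** 2)
print(round(x, 3))
```

If the target shifts mid-run, the same loop simply keeps chasing it – which is exactly the built-in adaptation the surrounding text argues most hand-coded algorithms lack.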

Life depends on adaptive algorithms as well – assessing the distance to a food source encoded in the dance of a bee, determining the meaning of speech or acoustic sounds, discriminating between friend and foe, a bird navigating by the polarisation angle of the sun, or a bat avoiding collisions based on split-second acoustic calculations.
These algorithms have taken millions of years to evolve and they keep evolving as the animal adapts in relation to its environment.
But here’s the problem for man-made algorithms. Very few have been designed with the capacity to evolve without direct human intervention, which may come too late as in the case of an obsolete vaccine or inadequately encrypted file.

The rate of change impacting enterprise environments will continue to accelerate, forcing decision-making to speed up in response – increasingly autonomously, with minimal human intervention. This has already occurred in advanced control, communication and manufacturing systems and is becoming increasingly common at the operational level in e-business procurement, enterprise resource planning, financial management and marketing applications, all of which depend on a large number of algorithms.

Dynamic decision support architectures will be required to support this momentum and be capable of drawing seamlessly on external as well as internal sources of knowledge to facilitate focussed decision capability.
Algorithms will need to evolve to underpin such architectures and act as a bulwark in this uncertain world, eventually driving it without human intervention; but only if they are self-verifying within the parameters of their human and computational environment.

Thursday, January 12, 2012

Future Enterprise- The Future of The Internet

David Hunter Tow – Director of the Future Enterprise Research Centre, forecasts that within the next decade the Internet and Web may be at risk of splitting into a number of separate entities- fragmenting under technological, national, business and social pressures.

In its place may emerge a network of networks – continuously morphing- linking and fragmenting, with no central dominant domain backbone; instead a disconnected, random structure of networks with information channeled through uncoordinated switching stations and content hubs, controlled by a range of geopolitical, social and enterprise interests.

For authoritarian states such as China, North Korea, Iran and Syria as well as criminal cartels, this will facilitate the expansion of their operations, allowing them to circumvent exposure of illegal activities in much the same way as the current Darknet network.

Darknet- the alternate network of virtual channels that currently operates beneath the backbone of the Internet has long been a place for clandestine operations, by both criminal and state networks. It is also used as a tool by cyber authorities to provide evidence of DDoS, port scanning, worms and other malware; also allowing dissidents from repressive regimes to remain in touch with the outside world, providing protection to whistle blowers and hosting pirated movie and music sites- out of reach of traditional search engines.

Autocratic governments are also maintaining increasingly tight censorship over politically sensitive sites via controlled points of entry to their cyber fiefdoms, even to the extent of distorting current and historical events. Both China and Iran now have plans to establish their own Internet infrastructure to further strengthen the control and censorship of their populations, and no doubt other authoritarian states will follow. But this power won’t be limited to dictator-run states. The threat of Internet censorship via the proposed SOPA (Stop Online Piracy Act) legislation in the US, and now the exposure of the NSA's pervasive cyberspying program, confirms the risk facing online privacy and freedom even within democratic nations, and has motivated opposition by citizens and companies concerned about storing personal and confidential corporate data in US Clouds.

At the same time white-hat hacker and pro-privacy groups are launching local wireless meshnets without any centralised control, as well as their own communication satellites linked to a grid of tracking stations, in order to avoid such government surveillance and interference, as discussed at the recent Chaos Communication Congress in Berlin.

But Apple, Facebook, Google, Amazon as well as Cable and Internet TV companies have already begun to fragment the web to support their own Walled Garden strategies of quarantining and manipulating membership data, applications, entertainment, search results and identities. Facebook membership data cannot be transferred to other social sites. Adobe’s Flash software as well as a number of developer applications were banned by Apple, which means the iPhone browser cannot display a large portion of the Internet. Likewise Amazon’s Kindle will only display books on sale or for rent by the company. Google fails to protect email privacy or adequately attribute search results to original sources.

Such social sites have become closed silos, similar in many respects to those of authoritarian states such as China.

The more this type of restricted, proprietary architecture gains traction on the Web the more it will become fragmented and the easier it will be for criminal groups to exploit, placing the open and egalitarian charter of the future Internet at risk.

But there are compelling reasons why such closed-silo strategies and gross invasions of citizen privacy, introduced by governments and mega Web companies, are likely to eventually collapse.

As outlined in previous blogs, physics ordains that information flows cannot be constrained and will eventually spread by pathways of least resistance, driven by consumer demand, competitive pressure and technological advances. In addition, biological ecosystems with limited genetic variation are the most vulnerable to extinction. Companies within the cyber ecosphere are equally vulnerable – more susceptible to competition and rapid changes in their technological and social environments if open access to innovative ideas and information flows is restricted. And balkanisation of the Internet is very bad for business – particularly US business, as companies retreat from using vulnerable Cloud and social media services.

The emergence of the Semantic Web is also a catalyst for greater openness, facilitating the interpretation, linking and application of knowledge stored in millions of discrete databases across the Web. This is a vital advance in fostering greater transparency, flexibility and autonomy within the Cybersphere.

But the battle for web control and Internet supremacy is only just beginning, not only between the US and China but also involving all other nations in the newly emerging multi-polar world. The US still maintains the controlling votes in ICANN, the body that manages the domain-name system, despite many attempts to democratise its management.

But now the US will be forced to ease up and stop playing the role of alpha male in an increasingly equal and diverse information world.

By its obsession with maintaining technological dominance of critical assets such as the Web – particularly at a time of global warming, with an urgent need to effectively manage global resources for all populations – the US is ironically accelerating the rise of alternate Internets and Webs.

China is charging ahead with alternate communication networks, as in most areas of new technology. After all, its search engine Baidu already has 500 million users – almost as many as Google worldwide. Baidu works hand in glove with the Communist Party and is the ultimate arbiter of reality for its users, committed to working within the government's paranoid censorship parameters, policed by a massive firewall and some 50,000 Internet police. But with 200 million bloggers producing trillions of words a day, as well as subscribers to RenRen and Sina Weibo – the equivalents of Facebook and Twitter – it’s becoming an increasingly tough call, even for a totalitarian government.

So now the momentum is building for a multi-Internet infrastructure as governments of all colours attempt to impose their will and dominate the evolution of the pre-eminent artefact of our civilisation, which may hold the key to the planet’s survival.

In the short term China cannot replicate the mega optic fibre cable, satellite and server networks of the present Internet, but it can deploy a mesh of alternate wireless channels linking its own network assets to other friendly systems, for example in Africa, South America, Iran and Russia; at the same time constructing a topology complete with their own domain servers. In addition, it will develop its own knowledge hubs while leveraging the existing core public assets such as the priceless science, engineering, social and economic databases of the current Web.

The new US Net Neutrality rules, recently introduced to prevent balkanisation by barring broadband providers from anti-competitive behaviour such as blocking content or slowing access to sites and applications – as Comcast attempted in 2007 with the BitTorrent peer-to-peer protocol – are already under heavy fire.

But as the pressure to bypass the new rules to allow a multi-speed Internet has increased, so too have the tensions been building between the major Social Web, Broadband and Cloud providers- Google, Apple, Facebook, Cisco, Verizon, Amazon, VMware etc. Cloud vendors have been erecting a new set of proprietary firewalls, with VMware the exception, adopting an open architecture to encourage developers to leverage and extend its technology.

The more such closed architecture with differing operational and security standards gain traction however, the higher the risk that the CloudSphere will eventually become fragmented, less productive and more vulnerable to hacking.

Meanwhile, despite its financial problems, the EU plans to spend billions on boosting broadband speeds to increase productivity and competitiveness. The European Commission will spend 9 billion euros to rollout super-fast broadband infrastructure and services across the European Union to help create a single market for digital public services by 2020 for half its population including- e-health, intelligent energy and cyber security applications, assisting utility companies, construction cooperatives, public authorities and rural users.

New Internet Architecture options are also on the horizon, with a number of innovations in train, forecast to improve the Web’s flexibility while avoiding fragmentation. But these could be put in jeopardy by the US’s intransigence over ceding control.

For example the National Science Foundation has established the Future Internet Architecture program, Nebula, to better secure Internet ID verification, data safety, mobile access and cloud computing. Google is also setting up a new Web architecture to improve search effectiveness.

At a recent Internet Conference run by the European Paradiso Group, a number of advanced options were discussed including- Internet routing algorithms with quantum options to provide more efficient and secure routing paths; flexible spectrum allocation; a smart Internet environment enabled by networked sensors; a content and context aware Web combined with self-organising and self-adaptive capabilities to provide more autonomy and optimisation.

In addition, the proposed Named Data Networking (NDN) architecture shifts the communication emphasis from today's focus on resource addresses, servers and hosts to one oriented to content and context. By identifying data objects instead of just locations, NDN makes the data itself the primary Internet focus. While the current Internet secures the channel or path between two communication points, adding data encryption as an extra, NDN builds security and trust into the content itself.
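The data-centric idea can be sketched by naming an object together with a digest of its content, so that any node can verify what it fetched regardless of where it was cached. This is a simplified illustration of the principle only, not the actual NDN protocol or its naming scheme:

```python
import hashlib

# Simplified data-centric security: each object is published under a
# name plus a digest of its content, so a consumer can verify
# integrity without trusting the host or cache it came from.
store = {}

def publish(name, content: bytes):
    digest = hashlib.sha256(content).hexdigest()
    store[(name, digest)] = content
    return name, digest

def fetch(name, digest):
    content = store[(name, digest)]
    # Verification depends only on the data, not on where it was cached.
    assert hashlib.sha256(content).hexdigest() == digest, "tampered content"
    return content

key = publish("/videos/lecture1", b"frame data")
print(fetch(*key))  # → b'frame data'
```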

These and other advances will result in the emergence of Internet Mark 3.0, following its early incarnation as a simple packet data transfer system and its transformation into a pervasive information search powerhouse over the last decade.

But Internet Mark 3.0 will only emerge if fragmentation of its infrastructure, and the ensuing chaos, is avoided.

Internet Mark 3.0 will offer complex, multidimensional and ultra-efficient processing, and the dissemination of real-time services and decision-making based on content and context – not just physical addresses.

Such capability will drive societal transformation at hyper speed, catalysing - urbanisation, mobility, vastly improved health and education services and all forms of virtual reality, as well as the beginning of a truly symbiotic Web-Human partnership in complex decision-making.

The Future of the Web has been discussed in a number of previous blogs by the author.

In summary-

By 2015 Web 2.0- The Social Web- will have developed into a complex multimedia interweaving of ideas, knowledge and social commentary, connecting over three billion people on the planet.

By 2025, Web 3.0- The Semantic Web- will have made many important contributions to new knowledge through network science, logical inference and artificial intelligence. It will be powered by a seamless computational mesh, enveloping and connecting human and artificial life, and will encompass all facets of our social and business lives- always on and available to manage every need.

By 2035, Web 4.0- the Intelligent Web- will be ubiquitous- able to interact with the repository of all available knowledge of human civilisation- past and present, digitally coded and archived for automatic retrieval and analysis. Human intelligence will have co-joined with advanced forms of artificial intelligence, creating a higher or meta-level of knowledge processing. This will be essential for supporting the complex decision-making and problem solving capacity required for civilisation's future survival and progress.

Also by 2035 the last of the enterprise walled gardens will break down and leak like stone walls surrounding an ancient town. Techniques and technologies across the spectrum of knowledge will continue to spread, expand and link in new ways as they always have, bypassing temporary impediments, because that is the physical reality of information and knowledge.

The future Internet will inevitably follow these laws, becoming more open and flexible and using common protocols as enterprises and consumers demand greater flexibility. As an increasing number of data providers implement Tim Berners-Lee’s Linked Data principles, the Web will transform into an open global Infosphere containing billions of links, coordinated by the World Wide Web Consortium.

This will offer a blueprint for connecting information from different sources into a single global data repository, with the Global Commons and Public Domain models playing an increasingly important democratic role.

Most importantly the Web will be equally available to and controlled by all nations, under the auspices of a specially constituted UN body, devolving forever away from US control.

But this can only happen if the underlying structural integrity of the Internet and Web is preserved. If managed as a global cooperative project it will result in enormous benefits for the whole of humanity. But if the Future Internet splits and fragments along geopolitical and competitive lines, as its current evolution suggests, then much of its potential benefit for our civilisation and planet will dissipate.

The next evolutionary phase of this pre-eminent human-engineered organism of the 21st century will be critical.

Tuesday, September 13, 2011

Future Enterprise- Cyberwars

The Forbes Global 2000 companies, as well as governments across the world, are under serious cyber attack, and it is likely to get much worse.
Cybercrime is a generic term for the illegal incursion into, and disruption of, both cyber and physical assets at the national, enterprise and community level. Cyber assets include the key information and knowledge resources – data, policies, reports, IP, algorithms, applications, programs and operational procedures – that a modern society in the 21st century relies on to operate and manage its business.
Physical assets include an increasing number of everyday objects and services controlled by computers and increasingly connected to the Internet including- infrastructure, manufacturing and production machinery, industrial control and communication centres, security systems, medical devices, electricity grids and meters, vehicles and transport systems as well as billions of consumer and industrial electronic devices.
Cybercrime is a relatively new phenomenon but because of its recent scale and game-changing implications for both government and industry it is rapidly becoming the dominant risk theme of the 21st century.
The opportunity for cyber attacks grows daily as corporations and governments continue to amass information about individuals in complex networks across the Web and at the same time new generations of cyber activists, some motivated purely by money and others by the desire to expose and destabilise corporations and governments, continue to hack into organisational secrets.
No enterprise, no matter how small or benign, will be safe from attack in the future, with an estimated 250,000 site breaches reported in the last few years including- EMC's RSA Security unit, the Public Broadcaster PBS, Sony's PlayStation network, Apple administration password database, the International Monetary Fund, South Korea's largest banks, the Spanish Police, US Senate, Texas Police Department, the CIA, Turkish and Malaysian governments, Google's Gmail, the Nokia forum site and Citibank's Credit Card accounts.
In the latest Norton Cybercrime Report, it was reported that breaches of various types claimed 431 million adult victims last year, with 73% of adults in the US alone incurring estimated financial losses of $US140 billion. As a criminal activity, cyber incursion is now almost as lucrative as the illegal drug trade. The total cost last year, including lost productivity and direct cash losses resulting from cyber attacks associated with viruses, malware and identity theft is estimated at $US 388 billion.
The security firm McAfee's report listed a range of cybercrime technologies deployed, including denial-of-service attacks, malware, spam, phishing, social engineering, mobile phone viruses, botnets and SMS Trojan messages. More recently, hacking drones – remote-controlled aerial vehicles that can automatically detect and compromise wireless networks by locating a weak spot in a corporate Internet connection – have been developed. To make matters worse, the first flaws have been discovered in the advanced encryption standard used for Internet banking and financial transactions, as well as for secure government transmission.
But most worrying, security experts from McAfee have now discovered the biggest series of cyber attacks to date, involving infiltration of the networks of 72 organisations around the world including- the UN, the governments of the US, Taiwan, India, South Korea, Vietnam and Canada, ASEAN, the International Olympic committee and an array of companies from defence contractors to high-tech enterprises including Google- with most of the victims unaware of the breaches.
This represents a massive loss of economic advantage- possibly the biggest transfer of IP wealth in history. Currently every company in every industry of significant size, with valuable IP, contracts or trade secrets is potentially under attack and this will inevitably extend to smaller organisations such as strategic hi-tech start-ups in the future. At the national level it involves exposure of sensitive state secrets including- policy intentions and decisions covering all levels and functions of Government such as trade, defence and industry policy.
The stakes are huge; a challenge to economies and global markets. From both an enterprise and State perspective therefore this is an intolerable situation; but because it has exploded at such speed, the response to date has largely been fragmented and ineffective.
But this is about much more than ruthless criminal intent to pillage credit cards, steal trade data or bring down unpopular sites. On a global scale, cybercrime has the potential to morph into full blown Cyberwar!
The main players in this game of cat and mouse currently include three broad groups, each with different motivations, although overlapping to a degree.
First- the State-sponsored hackers- China, Iran, Russia, Estonia and Israel- the last recently upping the cyberwar stakes with the Stuxnet attack on Iran's nuclear facilities, which also spread to Indonesia, North Korea and Syria. At the same time dictatorial regimes across the world, from Syria to Saudi Arabia, have introduced extreme punitive measures to monitor and control access by dissidents, particularly during the Arab Spring. And they have often coerced US and European technology companies to assist them, including Siemens- in the cross-hairs for helping the autocratic Government of Bahrain track down dissidents.
Second- the White hats- independent freelance hacker groups such as Anonymous/LulzSec. Their aim according to their manifesto is to expose the corruption and greed inherent in the play-books of big business and rogue regimes powered by hyper-capitalism and intent on plundering the natural resources of the planet. They also support whistle-blower groups such as WikiLeaks and social activist groups in general.
Third- the Black hats- with much more clearly defined goals, from overtly criminal to destructive and anarchistic. They are marshalling their attacks primarily against the Midas riches of credit card and financial databases across the globe, at the same time as China and Russia are hacking other Governments' IP, email and trade secrets.
Cyber Hackers now make up a complex substratum of social crime, composed of an ad hoc combination of hackers and security experts, each with a fiercely competitive agenda. But already fragmentation is extending to inter-cyber warfare between these rapidly evolving networks of dysfunctional society, at the same time overlapping with global terrorist groups.
The world's superpowers have already begun to introduce new cyber-policies in a desperate effort to protect their intellectual property, infrastructure and financial assets, as well as to control the flow of information within their populations- but these efforts are already bogged down.
The European Convention on Cybercrime is moving at glacial speed because EU governments are reluctant to share sovereign IT information with other powers, even if friendly. The new US Cyber Manifesto has also been stymied. The policy aims to support open access to the Internet while at the same time pursuing a policy of aggressive physical deterrence against any foreign powers such as China and Iran or organisations like WikiLeaks, which attempt to penetrate US computer systems. But this policy is meeting resistance from vested US business interests on issues of regulatory control and government surveillance of business system security.
China on the other hand appears to be going for the jugular. It has established The State Internet Information Office with the express purpose of regulating and controlling its vast Internet population and has even considered building an alternative Internet to sidestep the US-controlled ICANN.
Cybercrime may also be made a lot easier by the ubiquitous application of Cloud technology in the future. Most major corporations and government agencies will be using at least one Cloud to store and process their operational data, leased from Google, Cisco, IBM, Amazon, Microsoft, HP etc. Already several of these clouds, including Amazon's, have been breached and others have had outages. Gaining access to data from a dozen major information sources would be a lot easier than penetrating thousands of individual databases.
Even though most Cloud installations incorporate security software able to ward off rudimentary distributed denial-of-service and hacker attacks, future Cyberagent technologies will be much more effective because of superior forensic intelligence.
So the race is on to co-opt the most advanced cyber technology both to gain advantage, but also for prevention. Present day cybercrime technologies however will appear largely primitive within the next few years. The emphasis will shift to the application of much more sophisticated Cyberagent software technology.
The first generation of software agents appeared in the nineties and was used to trawl the Web, applying basic search procedures to locate information resources such as online shopping or travel sites and to find the best prices.
The second generation emerged around five years later. These programs were smarter, incorporating artificial intelligence that enabled them to make decisions more autonomously to meet their operational goals. They were deployed mainly in simulations of population behaviour and interaction in a variety of environments- shopping malls, supply chains, as well as disaster and conflict areas. In addition, they possessed superior negotiation and decision logic skills, using Game theory and semantic inferencing techniques.
But the third generation agents will be something else again. These will be based on complementary combinations of advanced AI techniques such as- 'evolutionary algorithms', which allow them to constantly improve their skills; 'neural networks' for superior pattern recognition and learning; 'Bayesian logic' for powerful inferencing capability; 'ant foraging' to help find the most efficient paths through complex network environments; and 'swarm' technology, allowing individual agent intelligence to be amplified by working cooperatively in large groups.
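To make the first of these techniques concrete, here is a minimal sketch of an evolutionary algorithm- the kind of mechanism that would let an agent "constantly improve its skills" against a fitness measure. The bit-string genome and the OneMax fitness function are illustrative stand-ins only; a real agent would score mission success instead.

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=100):
    """Minimal evolutionary algorithm: bit-string genomes improve
    through repeated selection and mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fitter half of the population as parents
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated copies of the parents
        children = [[bit ^ (random.random() < 0.1) for bit in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("OneMax"); the population converges
# towards the all-ones genome purely by variation and selection.
best = evolve(fitness=sum)
print(best, sum(best))
```

The same loop- vary, score, keep the best- underlies far more elaborate schemes with crossover and adaptive mutation rates.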
They will also increasingly be capable of tapping into the enormous computational intelligence of the Web, including the public databases of mathematical and scientific algorithms, eventually allowing their intelligence to be amplified a hundredfold over previous agent capabilities.
Such agent swarms will also be equipped behaviourally and cognitively to focus on their missions with laser or Zen-like concentration, to the exclusion of everything else, until they have chased down their quarry; whether corporate strategic plans, government covert secrets or nuclear missile blueprints.
This Uber-level of intelligence will transform Agent swarms into formidable cyber strike forces, able to operate under deep cover or in sleeper mode- masquerading as harmless chunks of code until a cell is activated to attack- and to replicate rapidly if additional forces are required.
Although this might sound like science fiction, the AI techniques involved, such as evolutionary algorithms, neural networks and swarm architectures have been in common use in business and industry for over ten years. The capacity to harness them in cyber strike force mode is only a matter of time.
But all parties are now beginning to understand that the nature of conflict and the balance of world power are shifting with lightning speed, rendering obsolete overnight the traditional nature of war and economic dominance in a globalised cyber-world. Future conflicts will not be about destroying an enemy armed with billion dollar hi-tech armaments such as tanks, jets and warships, but will be played out largely in future cyberspace.
What value a sophisticated weapons system if it can be disabled by an elite cyber hacker with a Stuxnet-type virus?
What value armies of highly trained soldiers if their command and control centres can be disabled with a few keyboard strokes and a swarm of smart software agents?
What value the trillions of dollars spent on containing Al-Qaeda if the economic and logistical systems supporting the attack can be thrown into disarray by a powerful artificial intelligence algorithm?
But the CEOs of major corporations and military commanders of the major powers are still coming to terms with the mind-blowing ramifications of Cyberwar. Not only would their systems soon be obsolete but so would their command structures.
Adding to the pressure is the impact of global warming and the overuse of the planet’s finite natural resources. Cyberwars are more likely to flourish in times of food and critical resource shortages, with countries and enterprises desperate for inside knowledge to secure access to critical supply information. That time is not far off, with estimates of critical food shortages and rising prices as early as 2013, with a follow on spike in global conflict highly likely.
One thing is certain. From now on Cyberspace will be the new corporate and state battleground and Cybercrime the main risk protagonist.
The threat of all-out Cyberwar is now an urgent issue that transcends the lines between individual enterprises and governments. Unless a global cyber security framework, binding both the private and public sectors, can be engineered, a world of disorder will rapidly emerge- a turbulent world, where change has ceased to be beneficial and becomes ultimately destructive.

Friday, July 1, 2011

Future Enterprise- The Knowledge Universe

David Hunter Tow- Director of the Future Enterprise Research Centre contends that the dynamics and evolution of the Knowledge Universe are governed by the laws of physics just as the objects in our physical galaxy and universe.

Our Milky Way is a large barred spiral galaxy approximately 100,000 light years across, containing a pantheon of amazing cosmic objects including- at least 200 billion suns and double that number of planets- some just like earth; black holes- including a massive one at its centre equal in power to 4 million suns; numerous dying or dead stars- burnt-out white and brown dwarfs, neutron stars and remnants of supernovas; trillions of asteroids and meteorites; and vast clouds of hydrogen gas and other molecules giving birth to new stars.

The dynamic links between these galactic entities are primarily a function of the all-pervasive force of gravity, which warps spacetime, creating black holes and initiating the birth and death of stars.

This incredible menagerie does not therefore function as a set of separate objects, but constitutes a gigantic and complex network in a constant state of evolution, emitting radiation from the longest microwave and infrared to the shortest and most energetic x-ray and gamma wavelengths. In turn it is influenced by the other 100-200 billion galaxies that exist in our universe, which in turn may be influenced by other universes or causal patches in a multiverse.

And our small planet, harbouring perhaps the most advanced life form in the universe, is directly or indirectly influenced by all of them.

Our planet’s emerging Knowledge Universe is analogous to this gigantic network of linked galactic objects; a boundless array of information and knowledge objects connected within the networks of the Internet and Web and controlled by its own physical laws.

Information and knowledge objects evolve in a similar way to stars, planets and black holes by adapting to the laws of physics and information within their environments.

They may be loosely classified in terms of a dozen major categories including-

Knowledge Repositories- databases, data warehouses, data centres and modern-day Clouds; Knowledge Processors and Generators- including a vast array of enterprises, web and social sites, specialist software developers as well as community, social, cultural and scientific groups and institutions. These utilise a range of powerful computing devices increasingly linked to the Internet as well as human minds, interconnected via the Web in the form of a powerful computational intelligence.

In addition there exist a plethora of Knowledge Aggregators, Interpreters and Distributors- news feed publishers in both printed and electronic forms, modern day encyclopedia creators such as Wikipedia, and compilers of mathematical, biological, environmental, economic, financial and demographic statistics; utilising networks of all types- wired and wireless, channelling knowledge between and within objects across the Web.

These and many other knowledge object classes and sub-classes constitute a vast network of networks, constantly combining and morphing in unlimited combinations.

A Cloud for example not only stores information, but may process and transmit it as a service. Likewise a social or gaming network may function as a utility, applying database technology such as SQL and many other software tools; but also may manage its knowledge by storing member details and applications via an internal or external Cloud, distributing services via mobile media devices to its members, advertisers and other processing agents. In turn ubiquitous mobile devices - smart phones and tablets, increasingly perform heavy duty processing and provide significant internal storage as well as wireless transmission connected to other networks.

All these objects have a role to play in the knowledge universe menagerie. And in doing so they’re involved in an evolutionary dance of cosmic proportions. But the thing is, this dance is never going to stop and is accelerating in both volume and complexity.

It is estimated that by 2015 the amount of information will quadruple, generated by vast volumes of video transmission as well as countless new applications from the business, social and science research worlds- measured in petabytes.

Knowledge objects are also similar to and interwoven with the cosmic physical forces of the galaxy as they are born, grow, merge, morph, split, regenerate and die, based on the adaptive pressures of their environments. And more and more end up residing in the free public domain.

The evolution of each object is therefore a function of all other knowledge objects in its galaxy, following its own information laws controlled by physical principles and constraints. These include for example the Laws of thermodynamics and entropy, which define the limits of computation and the conversion of data into knowledge. And Shannon’s Laws, which set limits on information channel capacity and transmission.

The laws of physics also include those governing information and knowledge flows, such as the Action Principle- which defines the shortest and least energy intensive path between objects.

The Least Action Principle postulates that any dynamical process, whether the trajectory of a light ray or orbit of a planet, follows a path of least resistance or one which minimises the 'action' or overall energy expended.

Physicist Richard Feynman showed that quantum theory also incorporates a version of the Action Principle and underlies a vast range of processes from physics to linguistics, communication and biology. The evidence suggests a deep connection between this principle based on energy minimisation and self-organising systems including light waves, information flows and natural system topographies, such as the flow of a river.
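The principle referred to here can be stated compactly. For a system with Lagrangian \(L\) (kinetic minus potential energy), the physically realised path \(q(t)\) is the one that makes the action stationary:

```latex
S[q] = \int_{t_1}^{t_2} L\big(q(t), \dot{q}(t), t\big)\, dt,
\qquad \delta S = 0
```

In Feynman's path-integral picture, every path contributes, but near the stationary path the contributions reinforce rather than cancel- which is why the classical "least action" trajectory dominates.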

Information and knowledge is now flowing seamlessly to every corner of the planet and its populations, mediated by the Internet and Web, reaching even the poorest communities in developing countries via cheap PCs, wireless phones and an increasing variety of other mobile devices.

Trying to block or bypass this flow is a pointless exercise and a sure way to hasten an enterprise's demise. Essential knowledge may be temporarily blocked, for example by patents, which protect IP but in an estimated 80% of cases are never applied, instead being used by large enterprises as a competitive blocking strategy. In the process this may deprive poorer populations of essential products such as life-saving drugs.

But regardless, eventually patents run out or are obsoleted by more advanced technologies. This is happening at an increasing rate in all fields - graphene-based electronics, superconducting materials, genetic-based therapies, green technologies, AI and quantum based computing methods.

Enterprise walled gardens therefore eventually break down or leak like a stone wall surrounding an ancient town, as the technology’s lifetime expires and new developments, opportunities and entrepreneurs emerge. Techniques and technologies across the spectrum of knowledge will continue to spread, expand and link in new ways as they always have, bypassing temporary impediments, because that is the physical reality of information and knowledge.

There are many examples of the recent spread and linking of knowledge objects in galactic orbits within the Knowledge universe including-

The Education Galaxy-

The transfer of knowledge is the basis of the education process and is now providing a global flow of free educational material and resources online, including open access courseware. Free courseware is already offered by a number of prestigious tertiary institutions including- The Massachusetts Institute of Technology, Yale and Harvard, as well as free knowledge reference sites such as Wikipedia. And this will accelerate, becoming pervasive in the near future; making it much cheaper and easier for educational resources to reach previously illiterate societies and communities, instead of being monopolised by traditional institutions such as Universities, particularly as a generational shift takes place.

The Knowledge Universe driven by The Action Principle will by 2040 finally allow the developing world to achieve equal status with the developed world in terms of access to knowledge, training and the realisation of human potential.

The Social Galaxy-

It is predicted there will be thousands of social networks within the Knowledge Universe over the next twenty years, in addition to the scores that exist today such as Facebook, LinkedIn, Google Plus, Badoo, Ning, Academia, Craigslist, Foursquare, Plaxo, Yelp, WiserEarth, Meetup, Mebo, Friendster etc, each catering to the needs of specialised groups.

In the near future these will be seamlessly connected by new applications such as Diaspora, avoiding the walled garden effect and allowing individuals to roam at will across the social universe unimpeded.

The Media Galaxy-

In the Media arena the die has been cast. The older print companies are desperately trying to reposition in the face of the online revolution. But by 2015 most print media will be forced to radically adapt towards an online multimedia model. Newspapers are already in turmoil, with advertising revenues collapsing as traditional classified streams dry up due to online competition.

Traditional news, both local and global, is rapidly being reduced to a stream of headlines with minimal analysis. Special editions and feature articles will continue in reduced quantity, but online short-burst information- text, video and audio streams will become increasingly popular, distributed via multimedia platforms such as the new generation smart phones, tablets and eBooks, already in common use.

By 2020- traditional free to air television channels will also have largely disappeared, along with many cable channels, with television advertising similarly caught in the headlight glare of change. The switch will be to web channels covering every topic- personalised to individual taste- viewable anywhere, anytime and watched primarily on mobile media screens. The personalised channel will be ubiquitous, with news, information, music and video filtered and customised to suit every personal taste.

All print media including magazines and books will also have followed newspapers to a multimedia model distributed over the Web for flexible viewing. The same already applies to music and video. The power of traditional publishers and creative gatekeepers is now being challenged as online stores such as Amazon, Apple and Google and many smaller companies allow any author, song writer or video producer to self-publish globally and cheaply.

The Cloud Galaxy-

The Cloud is a metaphor for shared infrastructure, software and data storage within the web.

Clouds already support a large range of knowledge environments including- social, cultural, business, energy, financial, office, retail, manufacturing, supply chain, booking, engineering, gaming, music, photo, video, media, communications and scientific applications.

Most of the major service and software providers, including IBM, EDS, Apple, Google, Amazon, Yahoo, Microsoft and eBay, still adopt a walled garden approach, providing access to proprietary databases through proprietary Web Application Programming Interfaces- APIs.

APIs rely on different ID and access mechanisms, as well as data in specific formats- for example to support music, video, particle collider and human genome information. APIs have therefore tended to slice the web into separate sources and silos, restricting its full potential.

However, in the future Clouds will become more generic and open, using common protocols as enterprises demand greater flexibility. But the next evolutionary phase will offer much more- in particular Data Linking. This will promote the sharing of datasets across diverse domains and between business, research and group partners, bringing the full semantic power of the Web into play and changing the face of business forever.

Tim Berners-Lee's recent publication of Linked Data Principles for connecting structured data on the web provides a future blueprint for connecting information from different sources into a single global data repository; accessible by generic data browsers and standard database and query languages. An increasing number of data providers have now begun to implement these Linked Data principles, leading to the creation of an open global data space containing billions of links, coordinated by the World Wide Web Consortium.
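At its core, Linked Data represents facts as subject-predicate-object triples whose identifiers are dereferenceable URIs, so data from different sources can be joined by following links. The sketch below illustrates the idea with plain Python; the example URIs are invented for illustration, and real deployments use RDF stores and SPARQL rather than list comprehensions.

```python
# Facts as subject-predicate-object triples keyed by URIs.
# (Hypothetical example.org identifiers, for illustration only.)
triples = [
    ("http://example.org/company/Acme", "locatedIn",
     "http://example.org/place/Berlin"),
    ("http://example.org/company/Acme", "industry",
     "http://example.org/sector/Energy"),
    ("http://example.org/place/Berlin", "country",
     "http://example.org/place/Germany"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Follow links across "sources": where is Acme, and in which country?
for _, _, place in match(s="http://example.org/company/Acme",
                         p="locatedIn"):
    print(place, match(s=place, p="country"))
```

Because every node is a URI, the third triple could live on an entirely different server, yet the join works the same way- which is what allows the "single global data repository" described above.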

And so the trendlines are now becoming clear. The Web is advancing as a multi-dimensional medium for the discovery, generation and linking of knowledge in all its forms, leveraging semantic and artificial intelligence. Individual supplier services will obviously continue to multiply, but enterprises will increasingly demand access to open source data clouds as well as most utility services.

Cloud spaces will continue to blend and split, fragment and reform in unlimited combinations and permutations. They will share data as media organisations already do amongst themselves and with countless news aggregators. The dividing lines between public and private ownership of application IP will also become fuzzy, with most applications and algorithms over time converting to generic forms- as have many critical software tools such as Linux, Java and SQL.

The Global Commons and Public Domain models therefore will play an increasingly important role. They represent a free knowledge-sharing marketplace accessible for the global benefit, where everyone wins as value-added services proliferate. Alternate knowledge and social hubs such as the thousands of Wikipedia lookalikes, controlled by consumer groups, will start to compete with and displace the power of the media and Uber-web enterprises such as Google, which will be forced to cede part of its global knowledge control in its own survival self-interest.

The Web will be controlled by all nations via the global commons in conjunction with a specially constituted body such as the present ICANN, devolving away from US control.

Many companies have tried to go against the evolutionary flow in the past and paid the price – including GM and Ford which continued to produce large gas-guzzling vehicles. They survived the low carbon/electric vehicle revolution only because of taxpayer largesse.

IBM was another that attempted to force the market to accept its large mainframes- against the trend towards small desktop computers and later the internet. IBM almost died but recovered just in time by embracing software and services, and now leveraging its Smart Planet Strategy.

Microsoft has until recently continued to promote desktop computing against the trend to internet and mobile computing and has been caught flat footed. It may survive as it belatedly adapts its office software to the Internet, but not in its previous dominant position.

Nokia was king of mobile phones but failed to see the shift to smarter phones and applications. It has now been forced to merge to survive, with a low likelihood of ever returning to its glory days.

Oracle, Apple and Facebook are busy building walled gardens. Although looking dominant today their longer term survival will also be in jeopardy if they continue their retro strategy against the flow.

The latest 'Smart Planet' paradigm, in which the infrastructure and processes of the planet- whether manufacturing, supply chains, electricity grids, water pipelines or traffic flows, are being re-engineered to optimise performance and achieve greener, more sustainable outcomes, will be the major driver for the enterprise of the future.

The Smart Planet will also demand that decisions be made more rigorously, efficiently, adaptively and therefore largely autonomously, within a radically new networked architecture.

This will be a major disruptive paradigm for many traditional IT companies which will be forced to redesign their applications and services from the ground up. Those that are too slow will be overtaken by the new generation of nimble system developers, not weighed down by legacy systems. The larger software enterprises in particular will struggle to keep up with the constant flow of knowledge and innovation required to survive, after comfortably dominating their market segment for years, as the cycles of change get shorter and shorter.

The flow of information and knowledge according to physical principles will continue at an accelerating rate, but still many companies will try to continue to swim against the flow to their eventual cost.

Within two decades today’s Internet and Web itself will have split into many alternate distributed but connected network descendants, eventually criss-crossing the knowledge universe and supporting autonomously managed worlds with different processing efficiency and reliability requirements.

Software and system developers and suppliers will need to differentiate their products increasingly as focussed value-added services, targeted to specific enterprises and industries. Service applications will therefore be differentiated primarily by the level of value they contribute to the enterprise- not their generic capability.

Enterprises in turn will need to be very agile, not only because of the exponential rise in the diversity and volume of knowledge, but also its potential for interweaving and creating opportunities in countless applications. They will therefore need to keep acutely tuned to the signals from their environment to survive.

As the Knowledge Universe expands and complexifies as a network of networks, with the spread of information and knowledge according to the laws of physics, enterprises will have only one avenue of escape. That is to continually innovate to generate new knowledge in the form of new products and services before the next wave of science and technology innovation overtakes them; just as electric cars, digital photography and smart phones have already obliterated whole sectors of industry in the blink of an eye.

No enterprise can escape this remorseless race. Better to join it rather than putting up a wall which will inevitably crumble.

They will need to run very hard just to survive- just like the Red Queen.

Monday, April 11, 2011

Future Enterprise- The Big Picture

David Hunter Tow- Director of the Future Enterprise Research Centre argues that seeing the big picture is essential for the survival of the future enterprise.
Seeing the Big Picture- relating the enterprise’s role within its physical and social environment, will become increasingly vital for its survival in the future. It will not be sufficient to plan one, two or five years ahead. Although near-term planning is essential, understanding the big shifts likely to impact our planet and future civilization, will be essential inputs to creative planning, adaptive agility and risk avoidance.
Take Google for example. Many of its acquisitions such as YouTube and Maps were made with the longer term potential in mind rather than short term profits. These were strategic targets that fitted with Google's general philosophy and could mesh with its long term goals. It was understood that eventually there was a high likelihood of a major payoff, and therefore immediate profitability from these acquisitions was not a priority.

Accurately predicting the longer term future has always been seen as problematic, ever since the Delphic Oracle was shown to have made her forecasts under the influence of laughing gas from an underground aquifer. So there's been an assumption that it's an impossible mission- and why bother, as long as the next three to five year profit forecasts are on track.

Most forecasting textbooks traditionally list a number of well-developed techniques based on- time series projections, regression analysis, Delphi and expert scenario opinions, artificial neural networks and simulation modelling. But these have usually failed miserably to predict the future in times of abrupt change within the broader physical, social or economic environment; such as recent extreme disasters, the global financial crisis or the Arab democratic revolution.
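The failure mode of the first of those techniques is easy to demonstrate. A minimal sketch of a time series projection- an ordinary least-squares trend line fitted to past data and extrapolated forward- works beautifully while the environment is stable, and breaks down exactly when it matters, at an abrupt change:

```python
def linear_trend_forecast(series, steps_ahead):
    """Fit an ordinary least-squares trend line to a time series and
    project it forward- classic extrapolation from past data."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(xs, series))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n - 1 + k)
            for k in range(1, steps_ahead + 1)]

# Steady growth extrapolates cleanly:
print(linear_trend_forecast([10, 12, 14, 16, 18], 2))  # [20.0, 22.0]
# ...but a sudden shock in the final period (think financial crisis)
# merely drags the trend line; the model cannot anticipate the break.
print(linear_trend_forecast([10, 12, 14, 16, 5], 2))
```

The second call still produces a confident-looking forecast- which is precisely the kind of false comfort the text warns against.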

In fact enterprises- even the biggest- have a poor history of seeing the big picture. For example, IBM didn't see the looming shift to personal computers and let slip one of the most strategic opportunities in modern times; handing operating software- DOS, ideal for managing desktop computers- to a small startup, Microsoft. And it almost repeated this failure with the advent of the Internet, ignoring its potential until it realised everyone else had embraced the ability to go online to the world. Only its enormous base of mainframe systems saved IBM from oblivion.
And in more recent times there were Ford and GM. Both went virtually bankrupt and had to be bailed out by the US Government because they would not or could not see the obvious shift in consumer sentiment towards smaller cars with lower fuel usage. And then there were Lehman Brothers, Citibank and Fannie Mae, which also thought they were invincible and too big to fail.
And the list goes on and on. So what’s the problem?

In all the above cases, enterprise management ignored the signals coming loud and clear from their environments via consumers and customers, through a combination of ignorance and arrogance. In the meantime other more agile companies such as Microsoft and Toyota picked up the signals and exploited the opportunities. But then Microsoft almost lost the plot to Google by not seeing the emerging power of the Internet as the dominant driver of information in today’s society.

In other words, the problem is that many companies, particularly those that are dominant in their industry sectors, begin to believe their own rhetoric: that they can manipulate the market according to their whims and wishes, with consumers eventually falling into line, perhaps with an extra push from a persuasive enough advertising campaign.
This may work in the short term, but if they continue to fail to adapt and evolve, going against the flow and focussing only on their past history through a prism that becomes increasingly self-reflective, such organisations will eventually lose sight of the big picture and reality. This is despite often employing hundreds of strategic analysts, planners, marketing gurus and forecasters, as well as deploying the most advanced computing systems on the planet.

The bigger the enterprise therefore, the more likely it is to live in a bubble of its own making, believing its own internally generated myths. So despite the use of the latest business intelligence software busily scavenging for patterns generated from past customer and financial data, standard industry forecasts and the odd focus group, the analysis will be virtually useless as a guide to an uncertain future and as an adaptation tool in the face of looming disasters.

And inevitably without being aware of the bigger but often more subtle shifts in their global environment, such enterprises eventually end up on the edge of a financial precipice without a safety net.

But still many of the latest forecasting tools reinforce this suicidal behaviour, extrapolating trends or variables from past datasets or building scenarios based on narrow parameters.
To understand the future therefore, it will be essential for an enterprise to also understand and be aware of reality at a far deeper level, beyond traditional business boundaries. Many of the most seemingly complex patterns of reality and life are derived from simple rules, based on the science of fractals, chaos, networks, quantum theory, computation and evolution.

Our increased understanding of simple structures such as the human genome allows us to gain exquisite insight into the enormous complexity of life and the cause of many diseases; while understanding chaos and network theory allows us to better manage ecosystems and improve our prediction of disasters- both natural and man-made.

So being able to see the bigger picture and understand its ramifications is vital for the survival of the future enterprise. But how does a system traditionally steeped in conventional narrowly focussed management techniques change its mindset? The major social and physical drivers are not always as obvious as global warming, globalisation or cyber-revolutions.
One critical part of the process involves integrating disparate, often unrelated sources of information and trends across multiple domains of knowledge and expertise. This goes well beyond traditional business intelligence and analytic techniques and comes under the new category of Macroscopic analysis.

The sciences are increasingly using the lens of the macroscope in innovative ways to support collaborative research and gain big picture perspectives in disciplines such as biology, cosmology, ecology and quantum physics. Macroscopes are flexibly updatable bundles of cyber-infrastructure software- algorithms, web services, computing resources and toolkit plug-ins- supporting computational analysis and workflows, and capable of synthesising vast amounts of research information from thousands of databases around the world. They perform meta-analyses to discover relevant patterns and make predictions to solve critical puzzles such as the genetic causes of cancer and the nature of dark matter. Going one step further, they combine interdisciplinary trends- across astronomy and biology, for example- to create new domains such as astrobiology, to determine the likelihood of other life forms existing within the universe.

Now business is also realising the value of applying the benefits of macroscopes to better see the big picture and avoid disaster.
Business macroscopes will provide a much more holistic view of complex information and knowledge sets, detecting significant risks and trends from multiple, often unrelated sources as in the sciences; derived not just from historical enterprise transactions and analyses, but from disciplines that have never before entered the organisation’s lexicon, such as climate change and social networks.

In other words, macroscopes are like giant biological filter feeders, such as whales. Vast flows of water containing micro-organisms and detritus are constantly pumped through the animal’s filtering system and assessed for their value, with only the residues necessary for the animal’s energy and survival retained. It is largely an automatic process, and so it will be for the enterprise once the architecture and parameters have been established. Instead of water, vast flows of complex information and events will be analysed to determine those fluxes most relevant to the organisation’s well-being and survival.
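The filter-feeder idea can be sketched in code. The Python toy below is purely illustrative- the event fields, watched topics and risk scores are all invented assumptions- but it shows the shape of the process: a stream of heterogeneous events is pumped through relevance filters and only the residue the enterprise needs is retained.

```python
# Illustrative sketch of the filter-feeder analogy: a stream of events
# passes through a relevance filter; only what matters is retained.
# Event fields and filtering rules are invented examples.

def relevant(event, watch_topics, risk_threshold):
    """Keep an event if it touches a watched topic or exceeds a risk score."""
    return event["topic"] in watch_topics or event["risk"] >= risk_threshold

def filter_stream(events, watch_topics, risk_threshold=0.8):
    return [e for e in events if relevant(e, watch_topics, risk_threshold)]

events = [
    {"topic": "climate", "risk": 0.9, "text": "new carbon tariff proposed"},
    {"topic": "sport",   "risk": 0.1, "text": "cup final result"},
    {"topic": "supply",  "risk": 0.4, "text": "minor port delay"},
]
retained = filter_stream(events, watch_topics={"climate", "supply"})
print(len(retained))  # the sport item is filtered out
```

The real work, of course, lies in setting the architecture and parameters- deciding which topics and risk signals matter- exactly as the paragraph above suggests.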

Implementing the macroscope in business will involve integrating it into the fabric of the future enterprise and its IT support systems. This will be a daunting task, but the templates have already been established in the form of the rigorously tested service-oriented architectures such as the Open Services Gateway initiative and Cyber Infrastructure Shell - OSGi/CIShell, which support the interoperability of applications and services by allowing dynamic plug and play integration of independent web service, algorithm and tool components.

Science used to lag the business community in its use of standard tools and innovative computational practice. Now it’s the reverse. Enterprises need to adopt the insight and rigour that science has had to apply to meet the high standards of proof required by society, based on the scientific method.

So now science and business are partners- in lockstep, applying the same computational methods and intelligence to secure their futures and never losing sight of the big picture.

Sunday, January 16, 2011

Future Enterprise- The Future of Work

Director of the Future Enterprise Research Centre- David Hunter Tow- forecasts that within the next two decades, the future architecture guiding the enterprise will dramatically alter traditional work patterns.

By 2020 the traditional notion of an individual's job and work-related role will be recognised as outdated, increasingly mismatched with the fluid requirements of the 21st century. Future productivity outputs will be measured in terms of flexible value-added criteria and contribution to the goals of the organisation linked to social utility, rather than in terms of hours worked on a specific project.

The traditional office will also become redundant as the wireless web expands, allowing information workers- fifty percent of the workforce, to operate from home or local social hubs such as coffee bars, as already occurring- (Ref Future Cities). All such centres will be linked seamlessly via the Internet's multimedia Wireless Grid/Mesh Utility supporting Web and Cloud Infrastructure. This will also enable enormous time and energy savings for workers and the planet in general, having a beneficial impact on the quality of life for millions.

Most tasks, even in the traditional labour-intensive sectors of health, construction, manufacturing and transport will be largely automated or robot-assisted. Projects will be managed and resourced on a real-time basis within the Web's global knowledge network- (Ref Future Web).

Boundaries will then blur between traditional full-time, part-time, contract and volunteering modes of employment as well as between worker and management roles. Most workers will share time between their own creative projects and enterprise applications as already happening, with creativity and innovation recognised as critical work competitive inputs.

Tomorrow's enterprise will be most effectively represented as a decision network model with decisions as nodes and information flows linking the relationships between them. This model offers an extremely powerful mechanism for understanding and optimising the enterprise of the 21st century- extending far beyond current non-adaptive process models.
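A minimal sketch of such a decision network: decisions are nodes and information flows are directed edges. The decision names below are invented for illustration, but the traversal shows the model’s power- it can answer questions like which downstream decisions must be revisited when an upstream input changes.

```python
# Toy decision-network model: nodes are decisions, directed edges are
# information flows between them. Node names are invented examples.
from collections import deque

flows = {  # decision -> decisions that consume its output
    "market_forecast": ["pricing", "production_plan"],
    "pricing": ["revenue_target"],
    "production_plan": ["supplier_orders", "revenue_target"],
    "revenue_target": [],
    "supplier_orders": [],
}

def downstream(decision, flows):
    """All decisions reachable from `decision` via information flows."""
    seen, queue = set(), deque(flows.get(decision, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(flows.get(node, []))
    return seen

# Which decisions must be revisited if the market forecast changes?
print(sorted(downstream("market_forecast", flows)))
```

Unlike a static process model, the graph can be rewired as the organisation adapts- which is precisely what makes the network representation more powerful than non-adaptive process models.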

The enterprise ecosystem’s organisational boundaries and work practices will therefore become increasingly fluid and porous, in sync with the new adaptive network flow architectures. Individuals will move freely between projects, career paths and virtual organisations within the ecosystem; adding value to each enterprise and in turn continuously acquiring new skills, linked to ongoing advanced learning programs. Work patterns will therefore gradually adapt to a model of seamless knowledge flows, generated by both human and web-based algorithms.

The semantic distinctions between workers and management will also disappear with robots performing a large proportion of operational roles without human supervision. The role of unions in the workplace will then have morphed to providing largely advisory, research and cooperative support services.

Concurrently with the above scenarios will be a recognition that the philosophy and architecture of the enterprise of the future will require a major focus on surviving in an increasingly complex environment; requiring the capacity to optimise operations and strategies in shorter and shorter timeframes within a fast changing global cultural, economic, physical and technological environment.

To achieve this goal, artificial and human intelligence will need to merge at both the strategic and operational levels, driven by a need to implement decision-making autonomously with minimal human intervention, as is already occurring in advanced communication and control systems. The genesis of this trend is also becoming apparent in current service-oriented applications including- procurement and supply, resource and financial management and health and lifestyle services, where capitalising on short-term windows of opportunity is paramount.


By 2040, work will relate primarily to the generation of new knowledge and services, by combining human, robot and web intelligence to maximum potential. Most processes will be fully automated both at the operational and strategic level within the context of the Intelligent enterprise. New products and services will be generated from concept to design to production within months, days or hours. Individual creativity and skills will remain in high demand but will increasingly be amplified and modulated within the context of the Web's cooperative decision-making and intelligence capacity.

The survival and success of the enterprise will therefore be contingent on its embedding within the broader cultural environment and norms of the larger community. Business will become an integral component of community culture, with its governance reflecting ethical and sustainable global standards. There will also emerge much greater cooperation rather than competition between enterprises, as globalisation and global warming become the dominant socio-economic drivers.
The days of separating commercial decisions from their social impact will be over.

By 2050, the larger enterprise will evolve as a semi self-organising entity within a larger ecosystem, operating in largely autonomous mode. New knowledge will constantly add value to its evolution, generated through organisational decision processes and knowledge network flows.

The Future Enterprise ecosystem will therefore morph, merge and dissemble in a seamless and endless cycle, generating new processes, knowledge and services to support the global community.

Welcome to a brave new world.

Future Enterprise- The Smart Business case

The Director of the Future Enterprise Research Centre- David Hunter Tow, argues the case for a complete reappraisal of the role of the Business Case and the validity of its current methodology.

There is an endemic structural weakness in today’s business case methodology, which is particularly problematic for Information Technology projects. It arises primarily because of the inability of most enterprises to adequately quantify the benefits relating to investment in new services and technologies.

Since the seventies, business and IT management have been stuck in a mindset which hasn’t changed from the time it became obvious that computer hardware and software were continuing to soak up large amounts of an organization’s capital expenditure budget.

And because of the increasing investment required to computerize the operations of an organization, it occurred to management that it would be a good idea to offer a business case to justify its introduction. From that point to the present day, the mythology relating to measuring the indirect benefits of this expenditure has grown.
At the beginning most justification was comparatively easy. The case for computerizing the early banking, insurance, manufacturing and retail industries could be easily made, by comparing FTE cost savings from redundant staff with the cost of the computer hardware and software and the much smaller number of operations personnel required.

But then came the next generation of computers- client/server distributed systems, networked technologies, real-time operating environments and software that hid the real cost of regular maintenance, customization and upgrades. So it got harder to justify such systems on a cost savings basis alone, once the original legacy back-office savings had been made.

But everyone knew there were major additional benefits associated with up-to-date information and reporting, faster turnaround of accounts, better customer service and improved management decision-making. And from a government perspective, there would be public benefits as well as the quality of service delivery improved.

But how to translate these other ‘soft’, ‘indirect’, ‘intangible’ benefits- obvious to everyone, but apparently fiendishly difficult to pin down- into cold hard cash that could realistically be factored into the ROI?

And then there emerged a rationalization to solve the problem- a dichotomy. The direct ‘tangible benefits’- those offering obvious direct cost savings, like reducing staff or inventory, were the ones that traditional bookkeepers could quantify and management felt comfortable with.
The indirect ‘intangible benefits’- the fuzzy ones, which of course by now were much bigger than the ‘direct benefits’ and could actually justify a major investment- would remain as best estimates. No-one in their right mind would actually attempt to calculate the value derived from improvements in strategic decision-making or customer satisfaction- and then put their signature to it- would they?

So gradually the mythology of the intangible, incalculable benefit became embedded in the enterprise psyche.

Managers loved it because they could promote their favorite projects without having to seriously justify them. CIOs loved it because any problems relating to the failure of an application to deliver its promised benefits couldn’t be sheeted home to them. Suppliers loved it because it could maximize their sales of the next big thing- sometimes they even wrote the business case. And if anyone was silly enough to question their integrity, they could check with the other industry lemmings who had invested in the same magic bullet based on a watertight business case and who would never admit to a competitor they had made a monumental investment error.

And lastly, the high priced guru consultancy firms loved it because it was easy to charge an astronomical fee for a complex business case without actually proving the real payoff; and they couldn’t be blamed if the investment turned out to be a dud, because everyone including the CEO had signed off on it. And everyone knew it was impossible to quantify intangibles anyway.

And so the myth of intangible benefits grew. And as more and more technological advances emerged- the internet, software as a service, content integration, virtualization etc.- the percentage of hard tangible benefits that could be offset against costs shrank to 20%, then 10%, then 5%, then zero- and then wandered off into negative territory.
And not only that, the business case now had to include sustainability and green benefits, many of which also were ‘intangible’.

So lots of sophisticated ‘guestimates’ and fudging with a nod and a wink became the norm and everyone jumped on the bandwagon, from senior management with MBA credentials to junior accountants; all began to succumb to the glib rhetoric, the blind leading the blind.

And this is in an era when the other sciences were going gang-busters- sending orbiters to Mars, decoding the genome, using stem cells to replace organs and AI to smarten the planet’s infrastructure. But of course it was still far too hard and inconvenient to nail the simple science behind quantifying indirect IT benefits.

So to bolster the myth further, the IT business case template was born- a very authoritative document. Just fill in the blanks and let the creative accountants do the rest.
‘What’s your best estimate of the benefits realizable from a Business Intelligence, Supply Chain, Marketing or HR system, as well as all the other stuff needed to support it- like a new service-oriented architecture, broadband communications network, data warehouses, security software, cloud technology etc?’

Well- just pick a number.

But by the mid-2000s the fragile house of cards was starting to wobble. The effect of all this ultra-sloppy, lazy accounting was starting to ripple through the enterprise, ending up in the bottom line. Project prioritization, long term planning, essential infrastructure upgrades- all were being distorted, skewed towards projects with short term easy-to-compute benefits, but little else. But now the big-ticket projects, essential to cope with a new world of real-time transaction processing, online sales and automated supply and distribution, wouldn’t wait.

Rigorous, realistic intangible benefits analysis is essential to confirm the payoff from these systems- process reengineering to re-energize the organization, improved customer service and pricing to maximize economic value, optimised decision support to leverage knowledge assets and smart infrastructure upgrades to minimize unforeseen disasters.

But on the other side of the universe the environmental and health industries had grasped the nettle thirty years previously and basically solved the problem.

What is the value of a new heart drug? It’s the percentage of lives saved or extended when compared with the old ‘legacy’ or non-existent heart drug. A 10% improvement in lives saved or extended can easily be translated into a tangible increase in productive working hours as well as reduced health care costs. So the reduction in the risk of heart patients dying early becomes the quantifiable benefit, and any side effects become a cost.
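The arithmetic can be sketched directly. All figures below are invented for illustration, but they show how a reduction in risk becomes a hard number rather than an ‘intangible’, with side effects entered as a cost.

```python
# Back-of-the-envelope valuation of a new heart drug, as described above.
# Every number here is an invented, illustrative assumption.

patients = 10_000
old_survival = 0.80          # survival rate with the legacy drug
new_survival = 0.88          # a 10% relative improvement on 0.80
extra_survivors = patients * (new_survival - old_survival)

value_per_survivor = 50_000  # assumed productive-hours + avoided-care value
side_effect_cost = 2_000_000 # assumed total cost of treating side effects

net_benefit = round(extra_survivors * value_per_survivor - side_effect_cost)
print(net_benefit)  # 38000000- a quantifiable benefit, not a guess
```

The same template- risk reduction times population times value, minus side-effect costs- is what the health and environmental industries have been applying for decades.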

Same with the environment. What are the benefits from the genetic engineering of crops, or from saving a wetland? If the new genes reduce the potential for disease, then the reduction of risk of crop losses becomes a calculable benefit. If they cause the spread of resistant weeds or insects, or can’t handle droughts- then that’s a cost.
If remediating fish spawning wetlands reduces the risk of fish extinction, then that’s a quantifiable benefit. If it reduces the ability of developers to build more flood-prone houses, then that’s a public benefit too.

Now back to IT. You say that’s fine for industries like Healthcare and the Environment, where the risks and benefits are obvious. But you can’t translate that approach to trickier stuff like the impact of IT on customer service or management decision-making.

Yes you can!!

The smarter corporate strategists and operations research groups including this Centre have been developing and applying techniques for over twenty years that successfully challenge the ‘intangible benefits’ myth.

They have combined risk theory with decision theory, tweaked it with some additional AI and come up with better enterprise planning, value modeling, system prioritization, evaluation and audit, and service optimization on a continuous basis. The results- a much healthier, profitable and more resilient enterprise.
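A minimal sketch of combining risk theory with decision theory: each claimed benefit is assigned a probability of being realised and a monetary impact, and competing projects are ranked by risk-adjusted expected value. The projects and figures below are invented examples, not the Centre’s actual method.

```python
# Risk-adjusted project comparison: benefits are (probability, impact)
# pairs, and each project is scored by expected value net of cost.
# All projects and figures are invented for illustration.

def expected_value(benefits, cost):
    """Sum of probability-weighted benefits, minus project cost."""
    return sum(p * impact for p, impact in benefits) - cost

projects = {
    # project: ([(prob benefit realised, impact if realised), ...], cost)
    "CRM upgrade": ([(0.7, 3_000_000), (0.4, 1_500_000)], 2_000_000),
    "BI platform": ([(0.5, 5_000_000)], 1_500_000),
}

ranked = sorted(projects.items(),
                key=lambda kv: expected_value(*kv[1]), reverse=True)
for name, (benefits, cost) in ranked:
    print(name, expected_value(benefits, cost))
```

The point is that a ‘fuzzy’ benefit such as improved decision-making is no longer a blank cell in the template- it is an estimated probability times an estimated impact, both of which can be audited and refined as the project evolves.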

And this is only the beginning for the future of the dynamic smart business case.

In the 21st century it will be integrated with a host of other new and more science-based planning techniques- risk analysis, forecasting, Bayesian probability networks and AI-based process optimisation algorithms; as the enterprise of the future positions itself to be a largely autonomous entity able to better react, seek new opportunities and re-create itself in a fast-changing and uncertain world.

The smart business case of the future therefore should not be seen as a standalone tool, but as a dynamic and integral part of enterprise planning and modelling. Unless it is applied rigorously, it can distort the whole fabric of the organisation.

Projects and services and products don’t end abruptly. They get absorbed into the fabric of the enterprise as they interweave with other processes, often emerging as part of a new technology or service. The smart business case should therefore be an evolving process also, constantly adjusting to the evolving nature of the enterprise.

It’s therefore high time that the whole crumbling edifice of the mythology of intangible benefits was put to rest and the business case became a lot smarter.

After all- you can’t have a smart enterprise or a smart planet without support from a smart business case.

And it is the 21st century.




