Tuesday, September 13, 2011

Future Enterprise- Cyberwars

The Fortune top 2000 companies, as well as governments across the world, are under serious cyber attack, and the situation is likely to get much worse.
Cybercrime is a generic term for the illegal incursion into, and disruption of, both cyber and physical assets at the national, enterprise and community level. Cyber assets include the key information and knowledge resources- the data, policies, reports, IP, algorithms, applications, programs and operational procedures- that a modern society in the 21st century relies on to operate and manage its business.
Physical assets include an increasing number of everyday objects and services controlled by computers and increasingly connected to the Internet including- infrastructure, manufacturing and production machinery, industrial control and communication centres, security systems, medical devices, electricity grids and meters, vehicles and transport systems as well as billions of consumer and industrial electronic devices.
Cybercrime is a relatively new phenomenon but because of its recent scale and game-changing implications for both government and industry it is rapidly becoming the dominant risk theme of the 21st century.
The opportunity for cyber attacks grows daily as corporations and governments continue to amass information about individuals in complex networks across the Web and at the same time new generations of cyber activists, some motivated purely by money and others by the desire to expose and destabilise corporations and governments, continue to hack into organisational secrets.
No enterprise, no matter how small or benign, will be safe from attack in the future, with an estimated 250,000 site breaches reported in the last few years including- EMC's RSA Security unit, the public broadcaster PBS, Sony's PlayStation network, an Apple administration password database, the International Monetary Fund, South Korea's largest banks, the Spanish Police, the US Senate, the Texas Police Department, the CIA, the Turkish and Malaysian governments, Google's Gmail, the Nokia forum site and Citibank's credit card accounts.
The latest Norton Cybercrime Report found that breaches of various types claimed 431 million adult victims last year, with 73% of adults in the US alone incurring estimated financial losses of US$140 billion. As a criminal activity, cyber incursion is now almost as lucrative as the illegal drug trade. The total cost last year, including lost productivity and direct cash losses from cyber attacks involving viruses, malware and identity theft, is estimated at US$388 billion.
A report from the security firm McAfee listed the range of cybercrime technologies deployed, including- denial of service attacks, malware, spam, phishing, social engineering on social sites, mobile phone viruses, botnets and SMS Trojan messages. More recently, hacking drones have been developed- remote-controlled aerial vehicles that can automatically detect and compromise wireless networks by locating a weak spot in a corporate Internet connection. To make matters worse, the first flaws have been discovered in the Advanced Encryption Standard used for internet banking and financial transactions, as well as for secure government transmissions.
But most worrying, security experts from McAfee have now uncovered the biggest series of cyber attacks to date, involving infiltration of the networks of 72 organisations around the world including- the UN, the governments of the US, Taiwan, India, South Korea, Vietnam and Canada, ASEAN, the International Olympic Committee and an array of companies from defence contractors to high-tech enterprises including Google- with most of the victims unaware of the breaches.
This represents a massive loss of economic advantage- possibly the biggest transfer of IP wealth in history. Currently every company of significant size, in every industry, with valuable IP, contracts or trade secrets is potentially under attack, and this will inevitably extend to smaller organisations such as strategic hi-tech start-ups. At the national level it involves the exposure of sensitive state secrets, including policy intentions and decisions across all levels and functions of government, such as trade, defence and industry policy.
The stakes are huge: a challenge to economies and global markets. From both an enterprise and a state perspective this is therefore an intolerable situation, but because the threat has exploded at such speed, the response to date has been largely fragmented and ineffective.
But this is about much more than ruthless criminal intent to pillage credit cards, steal trade data or bring down unpopular sites. On a global scale, cybercrime has the potential to morph into full blown Cyberwar!
The main players in this game of cat and mouse currently include three broad groups, each with different motivations, although overlapping to a degree.
First- the state-sponsored hackers- China, Iran, Russia, Estonia and Israel, the latter recently upping the cyberwar stakes with its Stuxnet attack on nuclear facilities in Iran, Indonesia, North Korea and Syria. At the same time, dictatorial regimes across the world, from Syria to Saudi Arabia, have introduced extreme punitive measures to monitor and control access by dissidents, particularly during the Arab Spring. And they have often coerced US and European technology companies into assisting them, including Siemens- in the cross-hairs for helping the autocratic government of Bahrain track down dissidents.
Second- the White hats- independent freelance hacker groups such as Anonymous/LulzSec. Their aim according to their manifesto is to expose the corruption and greed inherent in the play-books of big business and rogue regimes powered by hyper-capitalism and intent on plundering the natural resources of the planet. They also support whistle-blower groups such as WikiLeaks and social activist groups in general.
Third- the Black hats- with much more clearly defined goals, ranging from overtly criminal to destructive and anarchistic. They are marshalling their attacks primarily on the Midas riches of credit card and financial databases across the globe, at the same time as China and Russia are hacking other governments' IP, email and trade secrets.
Cyber hackers now make up a complex substratum of social crime, composed of an ad hoc combination of hackers and security experts, each with a fiercely competitive agenda. But fragmentation is already extending to inter-cyber warfare between these rapidly evolving networks of a dysfunctional society, which at the same time overlap with global terrorist groups.
The world's superpowers have already begun to introduce new cyber-policies to protect their intellectual property, infrastructure and financial assets, as well as to control the flow of information within their populations- but these efforts are already bogged down.
The European Convention on Cybercrime is moving at glacial speed because EU governments are reluctant to share sovereign IT information with other powers, even friendly ones. The new US cyber manifesto has also been stymied. It aims to support open access to the Internet while at the same time pursuing aggressive physical deterrence against any foreign powers such as China and Iran, or organisations like WikiLeaks, which attempt to penetrate US computer systems. But this policy is meeting resistance from vested US business interests over issues of regulatory control and government surveillance of business system security.
China on the other hand appears to be going for the jugular. It has established the State Internet Information Office with the express purpose of regulating and controlling its vast Internet population, and has even considered building an alternative Internet to sidestep the US-controlled ICANN.
Cybercrime may also be made a lot easier by the ubiquitous application of Cloud technology in the future. Most major corporations and government agencies will be using at least one Cloud, leased from providers such as Google, Cisco, IBM, Amazon, Microsoft or HP, to store and process their operational data. Several of these clouds, including Amazon's, have already been breached and others have had outages. Gaining access to data from a dozen major information sources would be a lot easier than penetrating thousands of individual databases.
Even though most Cloud installations incorporate security software able to ward off rudimentary distributed denial-of-service and hacker attacks, future cyberagent technologies will be much more effective because of superior forensic intelligence.
So the race is on to co-opt the most advanced cyber technology, both to gain advantage and for prevention. Present-day cybercrime technologies, however, will appear largely primitive within the next few years. The emphasis will shift to the application of much more sophisticated cyberagent software technology.
The first generation of software agents appeared in the nineties and was used to trawl the Web, applying basic search procedures to locate information resources such as online shopping or travel sites and to find the best prices.
The second generation emerged around five years later. These programs were smarter, incorporating artificial intelligence that enabled them to make decisions more autonomously to meet their operational goals. They were deployed mainly in simulations of population behaviour and interaction in a variety of environments- shopping malls, supply chains, and disaster and conflict areas. In addition, they possessed superior negotiation and decision logic skills, using game theory and semantic inferencing techniques.
But the third generation agents will be something else again. These will be based on complementary combinations of advanced AI techniques such as- 'evolutionary algorithms', which allow them to constantly improve their skills; 'neural networks' for superior pattern recognition and learning; 'Bayesian logic' for powerful inferencing capability; 'ant foraging' to help find the most efficient paths through complex network environments; and 'swarm' technology, allowing individual agent intelligence to be amplified by working cooperatively in large groups.
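As a concrete illustration of the swarm idea, the sketch below shows a minimal particle swarm optimiser, in which simple agents improve a shared solution by blending their own best finding with the swarm's. It is a toy example with hypothetical parameters, not a description of any actual cyberagent system.

```python
# Minimal particle swarm optimisation (PSO) sketch - a toy illustration of
# 'swarm' technology; all parameters and the objective are hypothetical.
import random

def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # initialise particle positions and velocities at random
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position so far
    gbest = min(pbest, key=fitness)          # best position found by the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update: inertia plus pulls towards personal and swarm bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest

# Example: the swarm cooperatively minimises a simple quadratic "search" objective.
print(pso(lambda p: sum(x * x for x in p)))
```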
They will increasingly be capable of tapping into the enormous computational intelligence of the Web, including its public databases of mathematical and scientific algorithms, eventually allowing their intelligence to be amplified a hundredfold over previous agent capabilities.
Such agent swarms will also be equipped behaviourally and cognitively to focus on their missions with laser or Zen-like concentration, to the exclusion of everything else, until they have chased down their quarry; whether corporate strategic plans, government covert secrets or nuclear missile blueprints.
This uber-level of intelligence will transform agent swarms into formidable cyber strike forces, able to operate under deep cover or in sleeper mode- masquerading as harmless chunks of code until a cell is activated and an attack launched- and to replicate rapidly if additional forces are required.
Although this might sound like science fiction, the AI techniques involved, such as evolutionary algorithms, neural networks and swarm architectures have been in common use in business and industry for over ten years. The capacity to harness them in cyber strike force mode is only a matter of time.
But all parties are now beginning to understand that the nature of conflict and the balance of world power are shifting with lightning speed, rendering obsolete overnight traditional notions of war and economic dominance in a globalised cyber-world. Future conflicts will not be about destroying an enemy armed with billion-dollar hi-tech armaments such as tanks, jets and warships, but will be played out largely in cyberspace.
What value a sophisticated weapons system if it can be disabled by an elite cyber hacker with a Stuxnet-type virus?
What value armies of highly trained soldiers if their command and control centres can be disabled with a few keyboard strokes and a swarm of smart software agents?
What value the trillions of dollars spent on containing Al-Qaeda if the economic and logistical systems supporting the attack can be thrown into disarray by a powerful artificial intelligence algorithm?
But the CEOs of major corporations and the military commanders of the major powers are still coming to terms with the mind-blowing ramifications of cyberwar. Not only will their systems soon be obsolete, but so will their command structures.
Adding to the pressure is the impact of global warming and the overuse of the planet's finite natural resources. Cyberwars are more likely to flourish in times of food and critical resource shortages, with countries and enterprises desperate for inside knowledge to secure access to critical supplies. That time is not far off, with critical food shortages and rising prices estimated as early as 2013, and a follow-on spike in global conflict highly likely.
One thing is certain. From now on Cyberspace will be the new corporate and state battleground and Cybercrime the main risk protagonist.
The threat of all-out cyberwar is now an urgent issue that transcends the boundaries between individual enterprises and governments. Unless a global cyber security framework binding both the private and public sectors can be engineered, a world of disorder will rapidly emerge- a turbulent world, where change has ceased to be beneficial and becomes ultimately destructive.

Friday, July 1, 2011

Future Enterprise- The Knowledge Universe

David Hunter Tow- Director of the Future Enterprise Research Centre, contends that the dynamics and evolution of the Knowledge Universe are governed by the laws of physics, just as are the objects in our physical galaxy and universe.

Our Milky Way is a large barred spiral galaxy approximately 100,000 light years across, containing a pantheon of amazing cosmic objects including- at least 200 billion suns and double that number of planets, some just like Earth; black holes, including a massive one at its centre with the mass of four million suns; numerous dying or dead stars- burnt-out white and brown dwarfs, neutron stars and the remnants of supernovas; trillions of asteroids and meteorites; and vast clouds of hydrogen gas and other molecules giving birth to new stars.

The dynamic links between these galactic entities are primarily a function of the all-pervasive force of gravity, which warps spacetime, creating black holes and initiating the birth and death of stars.

This incredible menagerie does not function as separate objects therefore, but constitutes a gigantic and complex network in a constant state of evolution, emitting radiation from the longest microwave and infrared to the shortest and most energetic x-ray and gamma wavelengths. In turn it is influenced by the other 100-200 billion galaxies that exist in our universe, which in turn may be influenced by other universes or causal patches in a multiverse.

And our small planet, harbouring perhaps the most advanced life form in the universe, is directly or indirectly influenced by all of them.

Our planet’s emerging Knowledge Universe is analogous to this gigantic network of linked galactic objects; a boundless array of information and knowledge objects connected within the networks of the Internet and Web and controlled by its own physical laws.

Information and knowledge objects evolve in a similar way to stars, planets and black holes by adapting to the laws of physics and information within their environments.

They may be loosely classified in terms of a dozen major categories including-

Knowledge Repositories- databases, data warehouses, data centres and modern-day Clouds; Knowledge Processors and Generators- including a vast array of enterprises, web and social sites, specialist software developers as well as community, social, cultural and scientific groups and institutions. These utilise a range of powerful computing devices increasingly linked to the Internet as well as human minds, interconnected via the Web in the form of a powerful computational intelligence.

In addition there exists a plethora of Knowledge Aggregators, Interpreters and Distributors- news feed publishers in both printed and electronic forms, modern-day encyclopedia creators such as Wikipedia, and compilers of mathematical, biological, environmental, economic, financial and demographic statistics; these utilise networks of all types, wired and wireless, channelling knowledge between and within objects across the Web.

These and many other knowledge object classes and sub-classes constitute a vast network of networks, constantly combining and morphing in unlimited combinations.

A Cloud for example not only stores information, but may process and transmit it as a service. Likewise a social or gaming network may function as a utility, applying database technology such as SQL and many other software tools; but also may manage its knowledge by storing member details and applications via an internal or external Cloud, distributing services via mobile media devices to its members, advertisers and other processing agents. In turn ubiquitous mobile devices - smart phones and tablets, increasingly perform heavy duty processing and provide significant internal storage as well as wireless transmission connected to other networks.

All these objects have a role to play in the knowledge universe menagerie. And in doing so they’re involved in an evolutionary dance of cosmic proportions. But the thing is, this dance is never going to stop and is accelerating in both volume and complexity.

It is estimated that by 2015 the amount of information- already measured in petabytes- will quadruple, generated by vast volumes of video transmission as well as countless new applications from the business, social and scientific research worlds.

Knowledge objects are also similar to and interwoven with the cosmic physical forces of the galaxy as they are born, grow, merge, morph, split, regenerate and die, based on the adaptive pressures of their environments. And more and more end up residing in the free public domain.

The evolution of each object is therefore a function of all other knowledge objects in its galaxy, following its own information laws controlled by physical principles and constraints. These include, for example, the laws of thermodynamics and entropy, which define the limits of computation and the conversion of data into knowledge, and Shannon's laws, which set limits on information channel capacity and transmission.
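Shannon's noisy-channel theorem, for instance, bounds the error-free rate at which information can flow through any channel:

\[ C = B \log_2\!\left(1 + \frac{S}{N}\right) \]

where C is the channel capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise ratio.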

The laws of physics also include those governing information and knowledge flows, such as the Action Principle- which defines the shortest and least energy-intensive path between objects.

The Least Action Principle postulates that any dynamical process, whether the trajectory of a light ray or the orbit of a planet, follows a path of least resistance- one which minimises the 'action', or overall energy expended.
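Formally, the action is the time integral of the system's Lagrangian, and the path actually taken is one for which the action is stationary:

\[ S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0 \]

where L is the Lagrangian- for a classical particle, its kinetic energy minus its potential energy.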

Physicist Richard Feynman showed that quantum theory also incorporates a version of the Action Principle, which underlies a vast range of processes from physics to linguistics, communication and biology. The evidence suggests a deep connection between this principle of energy minimisation and self-organising systems, including light waves, information flows and natural system topographies such as the flow of a river.

Information and knowledge is now flowing seamlessly to every corner of the planet and its populations, mediated by the Internet and Web, reaching even the poorest communities in developing countries via cheap PCs, wireless phones and an increasing variety of other mobile devices.

Trying to block or bypass this flow is a pointless exercise and a sure way to hasten an enterprise's demise. Essential knowledge may be temporarily locked up, for example by patents, which protect IP but which in an estimated 80% of cases are never applied, serving instead as a competitive blocking strategy for large enterprises. In the process this may deprive poorer populations of essential products such as life-saving drugs.

But regardless, patents eventually run out or are rendered obsolete by more advanced technologies. This is happening at an increasing rate in all fields- graphene-based electronics, superconducting materials, genetic-based therapies, green technologies, AI and quantum-based computing methods.

Enterprise walled gardens therefore eventually break down or leak like a stone wall surrounding an ancient town, as the technology’s lifetime expires and new developments, opportunities and entrepreneurs emerge. Techniques and technologies across the spectrum of knowledge will continue to spread, expand and link in new ways as they always have, bypassing temporary impediments, because that is the physical reality of information and knowledge.

There are many examples of the recent spread and linking of knowledge objects in galactic orbits within the Knowledge universe including-

The Education Galaxy-

The transfer of knowledge is the basis of the education process and is now providing a global flow of free educational material and resources online, including open access courseware. Free courseware is already offered by a number of prestigious tertiary institutions including- The Massachusetts Institute of Technology, Yale and Harvard, as well as free knowledge reference sites such as Wikipedia. And this will accelerate, becoming pervasive in the near future; making it much cheaper and easier for educational resources to reach previously illiterate societies and communities, instead of being monopolised by traditional institutions such as Universities, particularly as a generational shift takes place.

The Knowledge Universe driven by The Action Principle will by 2040 finally allow the developing world to achieve equal status with the developed world in terms of access to knowledge, training and the realisation of human potential.

The Social Galaxy-

It is predicted there will be thousands of social networks within the Knowledge Universe over the next twenty years, beyond the scores that exist today- Facebook, LinkedIn, Google Plus, Badoo, Ning, Academia, Craigslist, Foursquare, Plaxo, Yelp, WiserEarth, Meetup, Meebo, Friendster etc- each catering to the needs of specialised groups.

In the near future these will be seamlessly connected by new applications such as Diaspora, avoiding the walled garden effect and allowing individuals to roam at will across the social universe unimpeded.

The Media Galaxy-

In the media arena the die has been cast. The older print companies are desperately trying to reposition in the face of the online revolution. But by 2015 most print media will be forced to adapt radically towards an online multimedia model. Newspapers are already in turmoil, with advertising revenues collapsing as traditional classified streams dry up due to online competition.

Traditional news, both local and global, is rapidly being reduced to a stream of headlines with minimal analysis. Special editions and feature articles will continue in reduced quantity, but online short-burst information- text, video and audio streams will become increasingly popular, distributed via multimedia platforms such as the new generation smart phones, tablets and eBooks, already in common use.

By 2020- traditional free to air television channels will also have largely disappeared, along with many cable channels, with television advertising similarly caught in the headlight glare of change. The switch will be to web channels covering every topic- personalised to individual taste- viewable anywhere, anytime and watched primarily on mobile media screens. The personalised channel will be ubiquitous, with news, information, music and video filtered and customised to suit every personal taste.

All print media including magazines and books will also have followed newspapers to a multimedia model distributed over the Web for flexible viewing. The same already applies to music and video. The power of traditional publishers and creative gatekeepers is now being challenged as online stores such as Amazon, Apple and Google and many smaller companies allow any author, song writer or video producer to self-publish globally and cheaply.

The Cloud Galaxy-

The Cloud is a metaphor for shared infrastructure, software and data storage within the web.

Clouds already support a large range of knowledge environments including- social, cultural, business, energy, financial, office, retail, manufacturing, supply chain, booking, engineering, gaming, music, photo, video, media, communications and scientific applications.

Most of the major service and software providers, including IBM, EDS, Apple, Google, Amazon, Yahoo, Microsoft and eBay, still adopt a walled garden approach, providing access to proprietary databases through proprietary Web Application Programming Interfaces (APIs).

APIs rely on different identification and access mechanisms, as well as data in specific formats- for example to support music, video, particle collider or human genome information. APIs have therefore tended to slice the web into separate sources and silos, restricting its full potential.

In the future, however, Clouds will become more generic and open, using common protocols, as enterprises demand greater flexibility. But the next evolutionary phase will offer much more- in particular Data Linking. This will promote the sharing of datasets across diverse domains and between business, research and group partners, bringing the full semantic power of the Web into play and changing the face of business forever.

Tim Berners-Lee's recent publication of Linked Data Principles for connecting structured data on the web provides a future blueprint for connecting information from different sources into a single global data repository, accessible by generic data browsers and standard database and query languages. An increasing number of data providers have now begun to implement these Linked Data principles, leading to the creation of an open global data space containing billions of links, coordinated by the World Wide Web Consortium.
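As a small illustration of the Linked Data idea, the sketch below uses the open-source Python rdflib library to merge triples (subject-predicate-object statements built from URIs) and query them with SPARQL. The namespace, entities and facts are purely hypothetical.

```python
# Linked Data in miniature: facts from different "sources" are expressed as
# triples over shared URIs, merged into one graph and queried with SPARQL.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")          # hypothetical namespace
g = Graph()

# Two sources contribute triples about the same entities; merging is just addition.
g.add((EX.Berlin, EX.isCapitalOf, EX.Germany))
g.add((EX.Berlin, EX.population, Literal(3500000)))
g.add((EX.Germany, EX.memberOf, EX.EuropeanUnion))

# A standard SPARQL query works across the merged graph, wherever the triples came from.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city ?country WHERE { ?city ex:isCapitalOf ?country . }
""")
for row in results:
    print(row.city, "is the capital of", row.country)
```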

And so the trendlines are now becoming clear. The Web is advancing as a multi-dimensional medium for the discovery, generation and linking of knowledge in all its forms, leveraging semantic and artificial intelligence. Individual supplier services will obviously continue to multiply, but enterprises will increasingly demand access to open source data clouds as well as most utility services.

Cloud spaces will continue to blend and split, fragment and reform in unlimited combinations and permutations. They will share data, as media organisations already do amongst themselves and with countless news aggregators. The dividing lines between public and private ownership of application IP will also become fuzzy, with most applications and algorithms converting over time to generic forms, as many critical software tools such as Linux, Java and SQL already have.

The Global Commons and Public Domain models will therefore play an increasingly important role. They represent a free knowledge-sharing marketplace accessible for the global benefit, where everyone wins as value-added services proliferate. Alternate knowledge and social hubs, such as the thousands of Wikipedia lookalikes controlled by consumer groups, will start to compete with and displace the power of the media and uber-web enterprises such as Google, which will be forced to cede part of its global knowledge control in its own survival self-interest.

The Web will be controlled by all nations via the global commons, in conjunction with a specially constituted body such as the present ICANN, devolving away from US control.

Many companies have tried to go against the evolutionary flow in the past and paid the price- including GM and Ford, which continued to produce large gas-guzzling vehicles. They survived the low carbon/electric vehicle revolution only because of taxpayer largesse.

IBM was another that attempted to force the market to accept its large mainframes, against the trend towards small desktop computers and later the internet. IBM almost died but recovered just in time by embracing software and services, and is now leveraging its Smart Planet strategy.

Microsoft has until recently continued to promote desktop computing against the trend to internet and mobile computing and has been caught flat footed. It may survive as it belatedly adapts its office software to the Internet, but not in its previous dominant position.

Nokia was king of mobile phones but failed to see the shift to smarter phones and applications. It has now been forced to merge to survive, with a low likelihood of ever returning to its glory days.

Oracle, Apple and Facebook are busy building walled gardens. Although they look dominant today, their longer-term survival will also be in jeopardy if they continue this retro strategy against the flow.

The latest 'Smart Planet' paradigm, in which the infrastructure and processes of the planet- whether manufacturing, supply chains, electricity grids, water pipelines or traffic flows, are being re-engineered to optimise performance and achieve greener, more sustainable outcomes, will be the major driver for the enterprise of the future.

The Smart Planet will also demand that decisions be made more rigorously, efficiently, adaptively and therefore largely autonomously, within a radically new networked architecture.

This will be a major disruptive paradigm for many traditional IT companies which will be forced to redesign their applications and services from the ground up. Those that are too slow will be overtaken by the new generation of nimble system developers, not weighed down by legacy systems. The larger software enterprises in particular will struggle to keep up with the constant flow of knowledge and innovation required to survive, after comfortably dominating their market segment for years, as the cycles of change get shorter and shorter.

The flow of information and knowledge according to physical principles will continue at an accelerating rate, but still many companies will try to continue to swim against the flow to their eventual cost.

Within two decades today’s Internet and Web itself will have split into many alternate distributed but connected network descendants, eventually criss-crossing the knowledge universe and supporting autonomously managed worlds with different processing efficiency and reliability requirements.

Software and system developers and suppliers will increasingly need to differentiate their products as focussed, value-added services targeted to specific enterprises and industries. Service applications will therefore be differentiated primarily by the level of value they contribute to the enterprise, not by their generic capability.

Enterprises in turn will need to be very agile, not only because of the exponential rise in the diversity and volume of knowledge, but also its potential for interweaving and creating opportunities in countless applications. They will therefore need to keep acutely tuned to the signals from their environment to survive.

As the Knowledge Universe expands and complexifies as a network of networks, with the spread of information and knowledge according to the laws of physics, enterprises will have only one avenue of escape. That is to continually innovate to generate new knowledge in the form of new products and services before the next wave of science and technology innovation overtakes them; just as electric cars, digital photography and smart phones have already obliterated whole sectors of industry in the blink of an eye.

No enterprise can escape this remorseless race. Better to join it rather than putting up a wall which will inevitably crumble.

They will need to run very hard just to survive- just like the Red Queen.

Monday, April 11, 2011

Future Enterprise- The Big Picture

David Hunter Tow- Director of the Future Enterprise Research Centre argues that seeing the big picture is essential for the survival of the future enterprise.
Seeing the Big Picture- relating the enterprise's role to its physical and social environment- will become increasingly vital for its survival in the future. It will not be sufficient to plan one, two or five years ahead. Although near-term planning is essential, understanding the big shifts likely to impact our planet and future civilization will be an essential input to creative planning, adaptive agility and risk avoidance.
Take Google for example. Many of its acquisitions, such as YouTube and Maps, were made with longer-term potential in mind rather than short-term profits. These were strategic targets that fitted with Google's general philosophy and could mesh with its long-term goals. It was understood that eventually there was a high likelihood of a major payoff, and immediate profitability from these acquisitions was therefore not a priority.

Accurately predicting the longer-term future has always been seen as problematic, ever since the Delphic Oracle was shown to have made her forecasts under the influence of laughing gas from an underground aquifer. So there has been an assumption that it's an impossible mission- why bother, as long as the next three to five year profit forecasts are on track?

Most forecasting textbooks traditionally list a number of well-developed techniques based on time series projections, regression analysis, Delphi and scenario-based expert opinion, artificial neural networks and simulation modelling. But these have usually failed miserably to predict the future in times of abrupt change within the broader physical, social or economic environment, such as the recent extreme disasters, the global financial crisis or the Arab democratic revolutions.

In fact enterprises- even the biggest- have a poor history of seeing the big picture. IBM, for example, didn't see the looming shift to personal computers and let slip one of the most strategic opportunities in modern times, handing operating software- DOS, ideal for managing desktop computers- to a small startup called Microsoft. And it almost repeated this failure with the advent of the Internet, ignoring its potential until it realised everyone else had embraced the ability to go online to the world. Only its enormous base of mainframe systems saved IBM from oblivion.
And in more recent times there were Ford and GM. Both went virtually bankrupt and had to be bailed out by the US Government because they would not, or could not, see the obvious shift in consumer sentiment towards smaller cars with lower fuel usage. And then there were Lehman Brothers, Citibank and Fannie Mae, which also thought they were invincible and too big to fail.
And the list goes on and on. So what’s the problem?

In all the above cases, enterprise management ignored the signals coming loud and clear from their environments via consumers and customers, through a combination of ignorance and arrogance. In the meantime other more agile companies such as Microsoft and Toyota picked up the signals and exploited the opportunities. But then Microsoft almost lost the plot to Google by not seeing the emerging power of the Internet as the dominant driver of information in today’s society.

In other words, the problem is that many companies, particularly those that are dominant in their industry sectors, begin to believe their own rhetoric: that they can manipulate the market according to their whims and wishes, with consumers eventually falling into line, perhaps with an extra push from a sufficiently persuasive advertising campaign.
This may work in the short term, but if they continue to fail to adapt and evolve, going against the flow and focussing only on their past history through a prism that becomes increasingly self-reflective, such organisations will eventually lose sight of the big picture and reality. This is despite often employing hundreds of strategic analysts, planners, marketing gurus and forecasters, as well as deploying the most advanced computing systems on the planet.

The bigger the enterprise, therefore, the more likely it is to live in a bubble of its own making, believing its own internally generated myths. So despite the use of the latest business intelligence software, busily scavenging for patterns in past customer and financial data, standard industry forecasts and the odd focus group, the analysis will be virtually useless as a guide to an uncertain future or as an adaptation tool in the face of looming disasters.

And inevitably without being aware of the bigger but often more subtle shifts in their global environment, such enterprises eventually end up on the edge of a financial precipice without a safety net.

But still many of the latest forecasting trends reinforce this suicidal behaviour by extrapolating trends or variables from past datasets or building scenarios based on narrow parameters.
To understand the future therefore, it will be essential for an enterprise to also understand and be aware of reality at a far deeper level, beyond traditional business boundaries. Many of the most seemingly complex patterns of reality and life are derived from simple rules, based on the science of fractals, chaos, networks, quantum theory, computation and evolution.

Our increased understanding of simple structures such as the human genome allows us to gain exquisite insight into the enormous complexity of life and the cause of many diseases; while understanding chaos and network theory allows us to better manage ecosystems and improve our prediction of disasters- both natural and man-made.

So being able to see the bigger picture and understand its ramifications is vital for the survival of the future enterprise. But how does a system traditionally steeped in conventional narrowly focussed management techniques change its mindset? The major social and physical drivers are not always as obvious as global warming, globalisation or cyber-revolutions.
One critical part of the process involves integrating disparate, often unrelated sources of information and trends across multiple domains of knowledge and expertise. This goes well beyond traditional business intelligence and analytic techniques and comes under the new category of Macroscopic analysis.

The sciences are increasingly using the lens of the macroscope in innovative ways to support collaborative research and gain big picture perspectives in disciplines such as biology, cosmology, ecology and quantum physics. Macroscopes are flexibly updatable combinations or bundles of cyber infrastructure software, algorithms, web services, computing resources and toolkit plug-ins supporting computational analysis and workflows, capable of facilitating the synthesis of vast amounts of research information from thousands of databases around the world. They perform meta-analyses to discover relevant patterns and make predictions to solve critical puzzles such as the genetic causes of cancer and the nature of dark matter. And then going one step further they combine interdisciplinary trends across for example, astronomy and biology, to create new domains such as astrobiology, to determine the likelihood of other life forms existing within the universe.

Now business is also realising the value of applying the benefits of macroscopes to better see the big picture and avoid disaster.
Business macroscopes will provide a much more holistic view of complex information and knowledge sets, detecting significant risks and trends from multiple, often unrelated sources as in the sciences; derived not just from historical enterprise transactions and analyses, but from disciplines that have never before entered the organisation’s lexicon, such as climate change and social networks.

In other words, macroscopes are like giant biological filter feeders, such as whales. Vast flows of water containing micro-organisms and detritus are constantly pumped through the animal's filtering system and assessed for their value, with only the residues necessary for the animal's energy and survival retained. It is largely an automatic process, and so it will be for the enterprise once the architecture and parameters have been established. Instead of water, vast flows of complex information and events will be analysed to determine those fluxes most relevant to the organisation's well-being and survival.

Implementing the macroscope in business will involve integrating it into the fabric of the future enterprise and its IT support systems. This will be a daunting task, but the templates have already been established in the form of rigorously tested service-oriented architectures such as the Open Services Gateway initiative and Cyber Infrastructure Shell (OSGi/CIShell), which support the interoperability of applications and services by allowing dynamic plug-and-play integration of independent web service, algorithm and tool components.
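The plug-and-play idea can be sketched schematically. The toy registry below (plain Python, with entirely hypothetical component names) shows how independent analysis components might register against a common interface and be chained into a macroscope-style workflow at runtime; it illustrates the pattern, not OSGi/CIShell itself.

```python
# Schematic plug-and-play component registry for a "macroscope" workflow.
# All component names and signals are hypothetical.
from typing import Callable, Dict, List

PLUGINS: Dict[str, Callable[[dict], dict]] = {}   # registry of available components

def register(name: str):
    """Decorator that adds an analysis component to the registry."""
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PLUGINS[name] = fn
        return fn
    return wrap

@register("climate_feed")
def climate_feed(state: dict) -> dict:
    state["climate_risk"] = 0.7        # stand-in for an external data source
    return state

@register("market_feed")
def market_feed(state: dict) -> dict:
    state["market_trend"] = -0.2       # stand-in for another, unrelated source
    return state

@register("risk_synthesis")
def risk_synthesis(state: dict) -> dict:
    # toy meta-analysis combining signals from whichever feeds are plugged in
    state["alert"] = state.get("climate_risk", 0) - state.get("market_trend", 0) > 0.5
    return state

def run_workflow(steps: List[str]) -> dict:
    """Chain registered plug-ins in the order requested, like a macroscope workflow."""
    state: dict = {}
    for name in steps:
        state = PLUGINS[name](state)
    return state

print(run_workflow(["climate_feed", "market_feed", "risk_synthesis"]))
```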

Science used to lag the business community in its use of standard tools and innovative computational practice. Now it’s the reverse. Enterprises need to adopt the insight and rigour that science has had to apply to meet the high standards of proof required by society, based on the scientific method.

So now science and business are partners- in lockstep, applying the same computational methods and intelligence to secure their futures and never losing sight of the big picture.

Sunday, January 16, 2011

Future Enterprise- The Future of Work

Director of the Future Enterprise Research Centre-David Hunter Tow, forecasts that within the next two decades, the future architecture guiding the enterprise will dramatically alter traditional work patterns.

By 2020 the traditional notion of an individual's job and work-related role will be recognised as outdated, increasingly mismatched with the fluid requirements of the 21st century. Future productivity outputs will be measured in terms of flexible value-added criteria and contribution to the goals of the organisation linked to social utility, rather than in terms of hours worked on a specific project.

The traditional office will also become redundant as the wireless web expands, allowing information workers- fifty percent of the workforce, to operate from home or local social hubs such as coffee bars, as already occurring- (Ref Future Cities). All such centres will be linked seamlessly via the Internet's multimedia Wireless Grid/Mesh Utility supporting Web and Cloud Infrastructure. This will also enable enormous time and energy savings for workers and the planet in general, having a beneficial impact on the quality of life for millions.

Most tasks, even in the traditional labour-intensive sectors of health, construction, manufacturing and transport will be largely automated or robot-assisted. Projects will be managed and resourced on a real-time basis within the Web's global knowledge network- (Ref Future Web).

Boundaries will then blur between traditional full-time, part-time, contract and volunteering modes of employment, as well as between worker and management roles. Most workers will share their time between their own creative projects and enterprise applications, as is already happening, with creativity and innovation recognised as critical competitive inputs to work.

Tomorrow's enterprise will be most effectively represented as a decision network model, with decisions as nodes and information flows as the links between them. This model offers an extremely powerful mechanism for understanding and optimising the enterprise of the 21st century- extending far beyond current non-adaptive process models.
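A minimal sketch of this idea, with hypothetical decision names, represents decisions as nodes in a directed graph and traces which decisions are affected when upstream information changes.

```python
# Toy decision network: decisions are nodes, directed edges are information flows
# from one decision to those that depend on it. Names are purely illustrative.
from collections import defaultdict, deque

edges = [
    ("demand_forecast", "production_plan"),
    ("supplier_quotes", "production_plan"),
    ("production_plan", "inventory_policy"),
    ("production_plan", "pricing_decision"),
    ("inventory_policy", "distribution_schedule"),
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def downstream(decision: str) -> set:
    """All decisions that depend, directly or indirectly, on a given decision."""
    seen, queue = set(), deque([decision])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Which parts of the enterprise are affected if the demand forecast changes?
print(downstream("demand_forecast"))
```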

The enterprise ecosystem’s organisational boundaries and work practices will therefore become increasingly fluid and porous, in synch with the new adaptive network flow architectures. Individuals will move freely between projects, career paths and virtual organisations within the ecosystem; adding value to each enterprise and in turn continuously acquiring new skills, linked to ongoing advanced learning programs. Work patterns will therefore gradually adapt to a model of seamless knowledge flows, generated both by human and web-based algorithms.

The semantic distinctions between workers and management will also disappear with robots performing a large proportion of operational roles without human supervision. The role of unions in the workplace will then have morphed to providing largely advisory, research and cooperative support services.

Concurrent with the above scenarios will be a recognition that the philosophy and architecture of the enterprise of the future will require a major focus on surviving in an increasingly complex environment, requiring the capacity to optimise operations and strategies in shorter and shorter timeframes within a fast-changing global cultural, economic, physical and technological environment.

To achieve this goal, artificial and human intelligence will need to merge at both the strategic and operational levels, driven by a need to implement decision-making autonomously with minimal human intervention, as is already occurring in advanced communication and control systems. The genesis of this trend is also becoming apparent in current service-oriented applications including- procurement and supply, resource and financial management and health and lifestyle services, where capitalising on short-term windows of opportunity is paramount.


By 2040, work will relate primarily to the generation of new knowledge and services, by combining human, robot and web intelligence to maximum potential. Most processes will be fully automated both at the operational and strategic level within the context of the Intelligent enterprise. New products and services will be generated from concept to design to production within months, days or hours. Individual creativity and skills will remain in high demand but will increasingly be amplified and modulated within the context of the Web's cooperative decision-making and intelligence capacity.

The survival and success of the enterprise will therefore be contingent on its embedding within the broader cultural environment and norms of the larger community. Business will become an integral component of community culture, with its governance reflecting ethical and sustainable global standards. There will also emerge much greater cooperation rather than competition between enterprises, as globalisation and global warming become the dominant socio-economic drivers.
The days of separating commercial decisions from their social impact will be over.

By 2050, the larger enterprise will evolve as a semi self-organising entity within a larger ecosystem, operating in largely autonomous mode. New knowledge will constantly add value to its evolution, generated through organisational decision processes and knowledge network flows.

The Future Enterprise ecosystem will therefore morph, merge and disassemble in a seamless and endless cycle, generating new processes, knowledge and services to support the global community.

Welcome to a brave new world.

Future Enterprise- The Smart Business case

The Director of the Future Enterprise Research Centre- David Hunter Tow, argues the case for a complete reappraisal of the role of the Business Case and the validity of its current methodology.

There is an endemic structural weakness in today’s business case methodology, which is particularly problematic for Information Technology projects. It arises primarily because of the inability of most enterprises to adequately quantify the benefits relating to investment in new services and technologies.

Since the seventies, business and IT management have been stuck in a mindset which hasn't changed from the time it became obvious that computer hardware and software were continuing to soak up large amounts of an organization's capital expenditure budget.

And because of the increasing investment required to computerize the operations of an organization, it occurred to management that it would be a good idea to require a business case to justify its introduction. From that point to the present day, the mythology relating to measuring the indirect benefits of this expenditure has grown.
At the beginning most justification was comparatively easy. The case for computerizing the early banking, insurance, manufacturing and retail industries could be easily made, by comparing FTE cost savings from redundant staff with the cost of the computer hardware and software and the much smaller number of operations personnel required.

But then came the next generation of computers- client/server distributed systems, networked technologies, real-time operating environments and software that hid the real cost of regular maintenance, customization and upgrades. So it got harder to justify such systems on a cost savings basis alone, once the original legacy back-office savings had been made.

But everyone knew there were major additional benefits associated with up-to-date information and reporting, faster turnaround of accounts, better customer service and improved management decision-making. And from a government perspective there would be public benefits as well, as the quality of service delivery improved.

But how were these other 'soft', 'indirect', 'intangible' benefits- obvious to everyone, but apparently fiendishly difficult to pin down- to be translated into cold hard cash that could realistically be factored into the ROI?

And then there emerged a rationalization to solve the problem- a dichotomy. The direct ‘tangible benefits’- those offering obvious direct cost savings, like reducing staff or inventory, were the ones that traditional bookkeepers could quantify and management felt comfortable with.
The indirect 'intangible benefits'- the fuzzy ones, which of course by now were much bigger than the 'direct benefits' and could actually justify a major investment- would remain as best estimates. No-one in their right mind would actually attempt to calculate the value derived from improvements in strategic decision-making or customer satisfaction, and then put their signature to it- would they?

So gradually the mythology of the intangible, incalculable benefit became embedded in the enterprise psyche.

Managers loved it because they could promote their favorite projects without having to seriously justify them. CIOs loved it because any problems relating to the failure of an application to deliver its promised benefits couldn't be sheeted home to them. Suppliers loved it because they could maximize their sales of the next big thing, sometimes even writing the business case themselves. And if anyone was silly enough to question their integrity, they could check with the other industry lemmings who had invested in the same magic bullet on the basis of a watertight business case, and who would never admit to a competitor that they had made a monumental investment error.

And lastly, the high priced guru consultancy firms loved it because it was easy to charge an astronomical fee for a complex business case without actually proving the real payoff; and they couldn’t be blamed if the investment turned out to be a dud, because everyone including the CEO had signed off on it. And everyone knew it was impossible to quantify intangibles anyway.

And so the myth of intangible benefits grew. And as more and more technological advances emerged- the internet, software as a service, content integration, virtualization etc, the percentage of hard tangible benefits that could be offset against costs shrank to 20%, then 10%, then 5%, then zero and then wandered off into negative territory.
And not only that, the business case now had to include sustainability and green benefits, many of which also were ‘intangible’.

So lots of sophisticated 'guesstimates' and fudging with a nod and a wink became the norm, and everyone jumped on the bandwagon, from senior management with MBA credentials to junior accountants; all began to succumb to the glib rhetoric, the blind leading the blind.

And this is in an era when the other sciences were going gang-busters- sending orbiters to Mars, decoding the genome, using stem cells to replace organs and AI to smarten the planet’s infrastructure. But of course it was still far too hard and inconvenient to nail the simple science behind quantifying indirect IT benefits.

So to bolster the myth further, the IT business case template was born- a very authoritative document. Just fill in the blanks and let the creative accountants do the rest.
‘What's your best estimate of the benefits realizable from a Business Intelligence, Supply Chain, Marketing or HR system, as well as all the other stuff needed to support it- like a new service-oriented architecture, broadband communications network, data warehouses, security software, cloud technology etc?’

Well- just pick a number.

But by the mid-2000s the fragile house of cards was starting to wobble. The effect of all this ultra-sloppy, lazy accounting was starting to ripple through the enterprise, ending up in the bottom line. Project prioritization, long-term planning and essential infrastructure upgrades were all being distorted- skewed towards projects with short-term, easy-to-compute benefits, but little else. And the big-ticket projects, essential to cope with a new world of real-time transaction processing, online sales and automated supply and distribution, wouldn't wait.

Rigorous, realistic intangible benefits analysis is essential to confirm the payoff from these systems- process reengineering to re-energize the organization, improved customer service and pricing to maximize economic value, optimised decision support to leverage knowledge assets and smart infrastructure upgrades to minimize unforeseen disasters.

But on the other side of the universe the environmental and health industries had grasped the nettle thirty years previously and basically solved the problem.

What is the value of a new heart drug? It's the percentage of lives saved or extended when compared with the old 'legacy' drug, or with no drug at all. A 10% improvement in lives saved or extended can easily be translated into a tangible increase in productive working hours as well as reduced health care costs. So the reduction in the risk of heart patients dying early becomes the quantifiable benefit, and any side effects become a cost.
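A back-of-the-envelope sketch of that calculation, using entirely hypothetical figures, might look as follows.

```python
# Hypothetical worked example of the risk-reduction approach described above.
# All figures are illustrative assumptions, not data from any actual drug study.
patients          = 100_000      # treated population
baseline_deaths   = 0.05         # 5% would die early without the new drug
risk_reduction    = 0.10         # the new drug saves or extends 10% of those lives
value_per_life    = 250_000      # assumed productive working hours + avoided care costs ($)
side_effect_cost  = 1_000_000    # assumed total cost of side effects ($)

lives_saved = patients * baseline_deaths * risk_reduction
net_benefit = lives_saved * value_per_life - side_effect_cost
print(f"Lives saved or extended: {lives_saved:.0f}, net quantified benefit: ${net_benefit:,.0f}")
```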

Same with the environment. What are the benefits from the genetic engineering of crops, or from saving a wetland? If the new genes reduce the potential for disease, then the reduction in the risk of crop losses becomes a calculable benefit. If they cause the spread of resistant weeds or insects, or can't handle droughts, then that's a cost.
If remediating fish-spawning wetlands reduces the risk of fish extinction, then that's a quantifiable benefit. If it reduces the ability of developers to build more flood-prone houses, then that's a public benefit too.

Now back to IT. You say that’s fine for industries like Healthcare and the Environment, where the risks and benefits are obvious. But you can’t translate that approach to trickier stuff like the impact of IT on customer service or management decision-making.

Yes you can!!

The smarter corporate strategists and operations research groups including this Centre have been developing and applying techniques for over twenty years that successfully challenge the ‘intangible benefits’ myth.

They have combined risk theory with decision theory, tweaked it with some additional AI, and come up with better enterprise planning, value modeling, system prioritization, evaluation and audit, and continuous service optimization. The result: a much healthier, more profitable and more resilient enterprise.
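A simple sketch of the underlying idea is to treat each claimed 'intangible' benefit as a risk-adjusted cash flow- a value if realised, weighted by the probability of realisation- and compare the expected total against cost. The benefit names and figures below are made-up assumptions, not results from any actual study.

```python
# Minimal sketch of combining risk and decision theory to put numbers on
# 'intangible' IT benefits. Every figure here is a hypothetical assumption.
benefits = [
    # (description,                   annual value if realised ($), probability of realisation)
    ("faster customer response",       400_000, 0.7),
    ("better decision support",        900_000, 0.5),
    ("fewer outage/disaster losses",   600_000, 0.4),
]
annual_cost = 800_000   # assumed licence, integration and support cost per year

expected_benefit = sum(value * prob for _, value, prob in benefits)
expected_net     = expected_benefit - annual_cost
print(f"Expected annual benefit: ${expected_benefit:,.0f}; expected net value: ${expected_net:,.0f}")
```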

And this is only the beginning for the future of the dynamic smart business case.

In the 21st century it will be integrated with a host of other new and more science-based planning techniques- risk analysis, forecasting, Bayesian probability networks and AI-based process optimisation algorithms; as the enterprise of the future positions itself to be a largely autonomous entity able to better react, seek new opportunities and re-create itself in a fast-changing and uncertain world.

The smart business case of the future therefore should not be seen as a standalone tool, but as a dynamic and integral part of enterprise planning and modelling. Unless it is applied rigorously, it can distort the whole fabric of the organisation.

Projects and services and products don’t end abruptly. They get absorbed into the fabric of the enterprise as they interweave with other processes, often emerging as part of a new technology or service. The smart business case should therefore be an evolving process also, constantly adjusting to the evolving nature of the enterprise.

It’s therefore high time that the whole crumbling edifice of the mythology of intangible benefits was put to rest and the business case became a lot smarter.

After all- you can’t have a smart enterprise or a smart planet without support from a smart business case.

And it is the 21st century.




