The Director of the Future Enterprise Research Centre, David Hunter Tow, forecasts that within the next two decades the future architecture guiding the enterprise will dramatically alter traditional work patterns.
By 2020 the traditional notion of an individual's job and work-related role will be recognised as outdated, increasingly mismatched with the fluid requirements of the 21st century. Future productivity outputs will be measured in terms of flexible value-added criteria and contribution to the goals of the organisation linked to social utility, rather than in terms of hours worked on a specific project.
The traditional office will also become redundant as the wireless web expands, allowing information workers, roughly fifty percent of the workforce, to operate from home or from local social hubs such as coffee bars, as is already occurring (Ref Future Cities). All such centres will be linked seamlessly via the Internet's multimedia Wireless Grid/Mesh Utility supporting Web and Cloud infrastructure. This will also deliver enormous time and energy savings for workers and the planet in general, with a beneficial impact on the quality of life for millions.
Most tasks, even in the traditional labour-intensive sectors of health, construction, manufacturing and transport, will be largely automated or robot-assisted. Projects will be managed and resourced on a real-time basis within the Web's global knowledge network (Ref Future Web).
Boundaries will then blur between traditional full-time, part-time, contract and volunteering modes of employment, as well as between worker and management roles. Most workers will share time between their own creative projects and enterprise applications, as is already happening, with creativity and innovation recognised as critical competitive inputs.
Tomorrow's enterprise will be most effectively represented as a decision network model with decisions as nodes and information flows linking the relationships between them. This model offers an extremely powerful mechanism for understanding and optimising the enterprise of the 21st century- extending far beyond current non-adaptive process models.
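A decision network of this kind can be illustrated in miniature: decisions become nodes, and information flows become weighted directed edges between them. The node names and weights below are invented purely for illustration, not drawn from any real enterprise model.

```python
# Toy sketch of a decision network: decisions are nodes, information
# flows are weighted directed edges. Edges are listed in topological
# order, so each decision's inputs are resolved before it is used.
from collections import defaultdict

# (source, destination, influence weight) - all names are illustrative
flows = [
    ("market_signal", "pricing_decision", 0.6),
    ("inventory_level", "pricing_decision", 0.4),
    ("pricing_decision", "production_plan", 1.0),
]

def evaluate(inputs, flows):
    """Propagate weighted information through the network, edge by edge."""
    values = defaultdict(float, inputs)
    for src, dst, weight in flows:
        values[dst] += weight * values[src]
    return dict(values)

result = evaluate({"market_signal": 1.0, "inventory_level": 0.5}, flows)
```

Here the pricing decision aggregates its two inputs (0.6 × 1.0 + 0.4 × 0.5 = 0.8), which then flows on to the production plan; a real decision network would of course carry far richer node logic than a weighted sum.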
The enterprise ecosystem’s organisational boundaries and work practices will therefore become increasingly fluid and porous, in sync with the new adaptive network flow architectures. Individuals will move freely between projects, career paths and virtual organisations within the ecosystem, adding value to each enterprise and in turn continuously acquiring new skills through ongoing advanced learning programs. Work patterns will therefore gradually adapt to a model of seamless knowledge flows, generated by both humans and web-based algorithms.
The distinctions between worker and management roles will also disappear, with robots performing a large proportion of operational roles without human supervision. The role of unions in the workplace will by then have morphed into providing largely advisory, research and cooperative support services.
Concurrent with these scenarios will be a recognition that the philosophy and architecture of the enterprise of the future must focus on surviving in an increasingly complex environment; this requires the capacity to optimise operations and strategies in ever shorter timeframes within a fast-changing global cultural, economic, physical and technological environment.
To achieve this goal, artificial and human intelligence will need to merge at both the strategic and operational levels, driven by the need to implement decision-making autonomously with minimal human intervention, as is already occurring in advanced communication and control systems. The genesis of this trend is also becoming apparent in current service-oriented applications, including procurement and supply, resource and financial management, and health and lifestyle services, where capitalising on short-term windows of opportunity is paramount.
By 2040, work will relate primarily to the generation of new knowledge and services, by combining human, robot and web intelligence to maximum potential. Most processes will be fully automated both at the operational and strategic level within the context of the Intelligent enterprise. New products and services will be generated from concept to design to production within months, days or hours. Individual creativity and skills will remain in high demand but will increasingly be amplified and modulated within the context of the Web's cooperative decision-making and intelligence capacity.
The survival and success of the enterprise will therefore be contingent on its embedding within the broader cultural environment and norms of the larger community. Business will become an integral component of community culture, with its governance reflecting ethical and sustainable global standards. There will also emerge much greater cooperation rather than competition between enterprises, as globalisation and global warming become the dominant socio-economic drivers.
The days of separating commercial decisions from their social impact will be over.
By 2050, the larger enterprise will evolve as a semi self-organising entity within a larger ecosystem, operating in largely autonomous mode. New knowledge will constantly add value to its evolution, generated through organisational decision processes and knowledge network flows.
The Future Enterprise ecosystem will therefore morph, merge and dissemble in a seamless and endless cycle, generating new processes, knowledge and services to support the global community.
Welcome to a brave new world.
Charting the major social, technological, scientific, environmental and cultural trends driving the evolution of the future enterprise
Sunday, January 16, 2011
Future Enterprise- The Smart Business Case
The Director of the Future Enterprise Research Centre, David Hunter Tow, argues the case for a complete reappraisal of the role of the business case and the validity of its current methodology.
There is an endemic structural weakness in today’s business case methodology, which is particularly problematic for Information Technology projects. It arises primarily because of the inability of most enterprises to adequately quantify the benefits relating to investment in new services and technologies.
Since the seventies, business and IT management have been stuck in a mindset that hasn’t changed since it first became obvious that computer hardware and software were soaking up large amounts of an organization’s capital expenditure budget.
And because of the increasing investment required to computerize the operations of an organization, it occurred to management that it would be a good idea to offer a business case to justify its introduction. From that point to the present day, the mythology relating to measuring the indirect benefits of this expenditure has grown.
At the beginning most justification was comparatively easy. The case for computerizing the early banking, insurance, manufacturing and retail industries could be easily made, by comparing FTE cost savings from redundant staff with the cost of the computer hardware and software and the much smaller number of operations personnel required.
But then came the next generation of computers- client/server distributed systems, networked technologies, real-time operating environments and software that hid the real cost of regular maintenance, customization and upgrades. So it got harder to justify such systems on a cost savings basis alone, once the original legacy back-office savings had been made.
But everyone knew there were major additional benefits associated with up-to-date information and reporting, faster turnaround of accounts, better customer service and improved management decision-making. And from a government perspective, there would be public benefits as well, as the quality of service delivery improved.
But how to translate these other ‘soft’, ‘indirect’, ‘intangible’ benefits, which were obvious to everyone but apparently fiendishly difficult to pin down, into hard cold cash that could realistically be factored into the ROI?
And then there emerged a rationalization to solve the problem- a dichotomy. The direct ‘tangible benefits’- those offering obvious direct cost savings, like reducing staff or inventory, were the ones that traditional bookkeepers could quantify and management felt comfortable with.
The indirect ‘intangible benefits’- the fuzzy ones, which of course by now were much bigger than the ‘direct benefits’ and could actually justify a major investment, would remain as best estimates. No-one in their right mind would actually attempt to calculate the value derived from improvements in strategic decision-making or customer satisfaction, and then put their signature to it- would they?
So gradually the mythology of the intangible, incalculable benefit became embedded in the enterprise psyche.
Managers loved it because they could promote their favorite projects without having to seriously justify them. CIOs loved it because any problems relating to the failure of an application to deliver its promised benefits couldn’t be sheeted home to them. Suppliers loved it because it could maximize their sales of the next big thing; sometimes they even wrote the business case. And if anyone was silly enough to question their integrity, they could check with the other industry lemmings who had invested in the same magic bullet based on a watertight business case, and who would never admit to a competitor that they had made a monumental investment error.
And lastly, the high priced guru consultancy firms loved it because it was easy to charge an astronomical fee for a complex business case without actually proving the real payoff; and they couldn’t be blamed if the investment turned out to be a dud, because everyone including the CEO had signed off on it. And everyone knew it was impossible to quantify intangibles anyway.
And so the myth of intangible benefits grew. And as more and more technological advances emerged- the internet, software as a service, content integration, virtualization etc, the percentage of hard tangible benefits that could be offset against costs shrank to 20%, then 10%, then 5%, then zero and then wandered off into negative territory.
And not only that, the business case now had to include sustainability and green benefits, many of which also were ‘intangible’.
So lots of sophisticated ‘guesstimates’ and fudging with a nod and a wink became the norm, and everyone jumped on the bandwagon, from senior management with MBA credentials to junior accountants; all began to succumb to the glib rhetoric, the blind leading the blind.
And this was in an era when the other sciences were going gangbusters- sending orbiters to Mars, decoding the genome, using stem cells to replace organs and AI to smarten the planet’s infrastructure. But of course it was still far too hard and inconvenient to nail the simple science behind quantifying indirect IT benefits.
So to bolster the myth further, the IT business case template was born- a very authoritative document. Just fill in the blanks and let the creative accountants do the rest.
‘What’s your best estimate of the benefits realizable from a Business Intelligence, Supply Chain, Marketing or HR system, as well as all the other stuff needed to support it, like a new service-oriented architecture, broadband communications network, data warehouses, security software, cloud technology and so on?’
Well- just pick a number.
But by the mid-2000s the fragile house of cards was starting to wobble. The effect of all this ultra-sloppy, lazy accounting was starting to ripple through the enterprise, ending up in the bottom line. Project prioritization, long term planning, essential infrastructure upgrades, all were being distorted- skewed towards projects with short term easy-to-compute benefits, but little else. But now the big-ticket projects, essential to cope with a new world of real-time transaction processing, online sales and automated supply and distribution, wouldn’t wait.
Rigorous, realistic intangible benefits analysis is essential to confirm the payoff from these systems- process reengineering to re-energize the organization, improved customer service and pricing to maximize economic value, optimized decision support to leverage knowledge assets, and smart infrastructure upgrades to minimize unforeseen disasters.
But on the other side of the universe the environmental and health industries had grasped the nettle thirty years previously and basically solved the problem.
What is the value of a new heart drug? It’s the percentage of lives saved or extended when compared with the old ‘legacy’ or non-existent heart drug. A 10% improvement in lives saved or extended can easily be translated into a tangible increase in productive working hours as well as reduced health care costs. So the reduction in the risk of heart patients dying early becomes the quantifiable benefit, and any side effects become a cost.
Same with the environment. What are the benefits from the genetic engineering of crops, or from saving a wetland? If the new genes reduce the potential for disease, then the reduced risk of crop losses becomes a calculable benefit. If they cause the spread of resistant weeds or insects, or can’t handle droughts, then that’s a cost.
If remediating fish-spawning wetlands reduces the risk of fish extinction, then that’s a quantifiable benefit. If it reduces the ability of developers to build more flood-prone houses, then that’s a public benefit too.
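The risk-reduction arithmetic used in these health and environment examples can be sketched directly. Every figure below is an invented placeholder for illustration, not clinical or actuarial data: the quantifiable benefit is simply the number of avoided adverse events multiplied by their value, net of the cost of side effects.

```python
# Hedged sketch of risk-reduction valuation: benefit = avoided events
# times value per event, minus side-effect costs. All figures invented.
def quantified_benefit(population, baseline_risk, risk_reduction,
                       value_per_avoided_event, side_effect_cost_total):
    # e.g. 100,000 patients at 5% baseline risk, with a 10% relative
    # improvement, avoid 500 adverse events
    avoided_events = population * baseline_risk * risk_reduction
    return avoided_events * value_per_avoided_event - side_effect_cost_total

benefit = quantified_benefit(
    population=100_000,
    baseline_risk=0.05,             # 5% baseline early-death risk
    risk_reduction=0.10,            # 10% relative improvement
    value_per_avoided_event=200_000,  # placeholder productive value
    side_effect_cost_total=1_000_000,
)
```

The same shape applies to crop losses or wetland remediation: swap in the event population, the baseline risk and the value per avoided loss.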
Now back to IT. You might say that’s fine for industries like healthcare and the environment, where the risks and benefits are obvious, but that you can’t translate that approach to trickier areas like the impact of IT on customer service or management decision-making.
Yes you can!!
The smarter corporate strategists and operations research groups including this Centre have been developing and applying techniques for over twenty years that successfully challenge the ‘intangible benefits’ myth.
They have combined risk theory with decision theory, tweaked it with some additional AI and come up with better enterprise planning, value modeling, system prioritization, evaluation and audit, and service optimization on a continuous basis. The results- a much healthier, profitable and more resilient enterprise.
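One way such a risk-weighted combination of decision theory and risk theory can work, sketched here with entirely made-up projects and figures: rank candidate investments by expected monetary value, where each benefit stream is discounted by its probability of actually being realised.

```python
# Hedged sketch of risk-weighted project prioritization: expected
# benefit (decision theory) discounted by realisation probability
# (risk theory), net of cost. Projects and numbers are invented.
projects = {
    "crm_upgrade":    {"benefit": 5_000_000, "p_realised": 0.4, "cost": 1_500_000},
    "data_warehouse": {"benefit": 2_000_000, "p_realised": 0.9, "cost": 800_000},
    "portal_rewrite": {"benefit": 3_000_000, "p_realised": 0.5, "cost": 2_000_000},
}

def expected_value(p):
    """Risk-discounted expected benefit, net of cost."""
    return p["benefit"] * p["p_realised"] - p["cost"]

ranked = sorted(projects,
                key=lambda name: expected_value(projects[name]),
                reverse=True)
```

Note how the ranking inverts a naive benefits comparison: the modest data warehouse, with its near-certain payoff, outranks the glamorous CRM upgrade whose headline benefit is unlikely to be realised.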
And this is only the beginning for the future of the dynamic smart business case.
In the 21st century it will be integrated with a host of other new and more science-based planning techniques- risk analysis, forecasting, Bayesian probability networks and AI-based process optimisation algorithms; as the enterprise of the future positions itself to be a largely autonomous entity able to better react, seek new opportunities and re-create itself in a fast-changing and uncertain world.
The smart business case of the future therefore should not be seen as a standalone tool, but as a dynamic and integral part of enterprise planning and modelling. Unless it is applied rigorously, it can distort the whole fabric of the organisation.
Projects and services and products don’t end abruptly. They get absorbed into the fabric of the enterprise as they interweave with other processes, often emerging as part of a new technology or service. The smart business case should therefore be an evolving process also, constantly adjusting to the evolving nature of the enterprise.
It’s therefore high time that the whole crumbling edifice of the mythology of intangible benefits was put to rest and the business case became a lot smarter.
After all- you can’t have a smart enterprise or a smart planet without support from a smart business case.
And it is the 21st century.
Tuesday, November 9, 2010
Future Enterprise- The Intelligent Enterprise
The enterprise of the future will increasingly depend on a wide range of rigorous artificial intelligence systems, algorithms and techniques to facilitate its operation at all levels of management.
As described in The Adaptable Enterprise blog, major decisions incorporating sophisticated levels of intelligent problem-solving will increasingly be applied autonomously and within real time constraints to achieve the level of adaptability required to survive in an ever changing and uncertain global environment. This trendline describes these techniques and their application.
A number of artificial intelligence techniques and algorithms are rapidly reaching maturity and will be essential components of the Intelligent Enterprise Architecture of the future, including:
Genetic algorithms- solution discovery and optimisation modelled on the genetic operators of crossover, replication and mutation, used to explore generations of parameterised options.
Bayesian networks- graphical models representing multivariate probability distributions; providing inference and learning based on cumulative evidence.
Fuzzy logic- non-binary methods of decision-making, allowing information inputs to be weighted and an activation threshold established.
Swarm intelligence- combining multiple simple components to achieve intelligent group behaviour.
Neural networks- pattern discrimination techniques modelled on networks of connected neurons.
Expert systems- rule-based inference techniques targeted at specific problem areas.
Intelligent agents- this form of AI is particularly relevant to the future enterprise architecture because it is designed to adapt to the web's dynamic environment; that is, an agent is designed to learn by experience. Agents can also act collaboratively in societies, groups or swarms. Through swarming behaviour, agents can achieve higher levels of intelligence, capable of making increasingly complex decisions autonomously.
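As a concrete miniature of the first technique listed, a genetic algorithm can be sketched as repeated selection, crossover and mutation over a population of candidate solutions. The objective function and all parameters below are illustrative toys, not an enterprise optimisation problem.

```python
# Minimal genetic algorithm sketch: selection, crossover, mutation.
# Maximises a toy one-dimensional fitness function with peak at x = 3.
import random

random.seed(42)

def fitness(x):
    return -(x - 3) ** 2  # maximum at x = 3

def evolve(generations=100, pop_size=30, sigma=0.5):
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # selection: keep fitter half
        children = [population[0]]              # elitism: replicate the best
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                 # crossover (averaging)
            child += random.gauss(0, sigma)     # mutation (gaussian noise)
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()  # converges near x = 3
```

In enterprise use the "individual" would be a parameterised plan or schedule rather than a single number, and the fitness function a business objective, but the generational loop is the same.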
The above techniques will continue to be enhanced and packaged in different combinations to provide immensely powerful problem solving capability over time. The technology is slowly being applied discretely within business intelligence, data mining and planning functions of enterprise systems.
However, AI is yet to realize its full potential within the enterprise model by being applied to decision-making in a targeted, autonomous fashion. When this happens over the next decade, the quality of decision-making is likely to improve significantly, with a concomitant reduction in operational and management risk.
Monday, November 1, 2010
Future Enterprise- Cyber-Infrastructure for World 2.0
Our future World 2.0 will face enormous challenges for the foreseeable future, including global warming, globalisation, and social and business hyper-change.
Global Warming will create shortages of food and water and loss of critical ecosystems and species. It will require massive prioritisation and re-allocation of resources on a global scale.
Globalisation will require humans to live and work together cooperatively as one species on one planet- essential for our survival and finally eliminating the enormous destruction and loss of life that wars and conflict inevitably bring.
Social and business change will present myriad challenges relating to building and maintaining a cohesive social fabric: providing democracy and justice, adequate levels of health and education, and solutions to urban expansion, crime prevention, transport congestion and food and water security, in a fast-changing global environment. This will require adaptation on a vast scale.
It is apparent that in order to meet these challenges, humans must harness the enormous advances in computing and communications technologies to achieve a complete makeover of the world’s Cyber-Infrastructure.
The infrastructure of the new cyber reality now affects every aspect of our civilisation. In tomorrow’s globalised world a dense mesh of super-networks will be required to service society’s needs- the ability to conduct government, business, education, health, research and development at the highest quality standard.
This infrastructure will be conjoined with the intelligent Internet/web, but will require additional innovation to facilitate its operation: a transparent and adaptable heterogeneous network of networks, interoperable at all levels of society.
In the last two decades tremendous progress has been made in the application of high-performance and distributed computer systems including complex software to manage and apply super-clusters, large scale grids, computational clouds and sensor-driven self-organising mobile systems. This will continue unabated, making the goal of providing ubiquitous and efficient computing on a worldwide scale possible.
But there’s a long road ahead. It is still difficult to combine multiple disparate systems to perform a single distributed application. Each cluster, grid and cloud provides its own set of access protocols, programming interfaces, security mechanisms and middleware to facilitate access to its resources. Attempting to combine multiple homogeneous software and hardware configurations in a seamless heterogeneous distributed system is still largely beyond our capability.
At the same time tomorrow’s World 2.0 enabling infrastructure must also be designed to cope with sustainability and security issues.
It is estimated that the ICT industry contributes 2-3% of total Greenhouse Gas emissions, growing at 6% per year compounded. If this trend continues, total emissions could nearly double within a decade. The next generation cyber-architecture therefore needs to be more power-adaptive. Coupled with machine learning, this could achieve savings of up to 70% of total ICT Greenhouse emissions by 2020.
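The compounding involved can be checked in a few lines. A minimal sketch, with the 6% annual rate taken from the estimate above and the ten-year horizon as an assumption for the sake of the example:

```python
# Illustrative arithmetic only: compound growth of ICT emissions.
# The 6% annual rate comes from the estimate above; the 10-year
# horizon is an assumption chosen for the example.
def project_growth(base: float, annual_rate: float, years: int) -> float:
    """Multiple of the base value after compounded annual growth."""
    return base * (1 + annual_rate) ** years

multiple = project_growth(1.0, 0.06, 10)
print(f"Emissions multiple after 10 years at 6%/yr: {multiple:.2f}")
```

At 6% compounded, emissions grow by a factor of about 1.8 over ten years, i.e. roughly a doubling.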
But the world is also grappling with the possibility of cyber-warfare as well as increasingly sophisticated criminal hacking, with an estimated 100 foreign intelligence organisations trying to break into US networks. A global protocol safeguarding cyber privacy rights between nations, combined with greater predictive warning of rogue attacks, is critically needed. The next generation of cyber-infrastructure will therefore have to incorporate autonomous intelligence and resilience in the face of both these challenges.
To meet these targets a lot will ride on future advances in the field of Self-Aware Networks- SANs. Previous blogs have emphasised the emergence of the networked enterprise as the next stage in advanced decision-making. SANs are a key evolutionary step on the path to this goal. Self-aware networks can be wired, wireless or peer-to-peer, allowing individual nodes to discover the presence of other nodes and links as required- largely autonomously. Packets of information can be forwarded to any node without traditional network routing tables, based on reinforcement learning and smart routing algorithms, resulting in reduced response times, traffic densities, noise and energy consumption.
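The table-free, learning-based forwarding described above is in the spirit of Q-routing, an early reinforcement-learning routing scheme. The following is a minimal sketch, not a protocol implementation; the line topology, unit hop cost, learning rate and exploration rate are all illustrative assumptions:

```python
import random

# Minimal Q-routing-style sketch: each node learns delivery-time
# estimates per destination instead of consulting a precomputed
# routing table. Topology and parameters are illustrative only.
class Node:
    def __init__(self, name, neighbours):
        self.name = name
        self.neighbours = list(neighbours)
        self.q = {}  # q[dest][neighbour] -> learned delivery-time estimate

    def table(self, dest):
        return self.q.setdefault(dest, {n: 1.0 for n in self.neighbours})

    def estimate(self, dest):
        return 0.0 if dest == self.name else min(self.table(dest).values())

    def choose(self, dest, explore=0.1):
        if random.random() < explore:      # occasional exploration
            return random.choice(self.neighbours)
        t = self.table(dest)
        return min(t, key=t.get)           # best-known next hop

    def update(self, dest, neighbour, observed, alpha=0.5):
        t = self.table(dest)
        t[neighbour] += alpha * (observed - t[neighbour])

def send(nodes, src, dest, hop_cost=1.0, max_hops=20):
    """Forward one packet, refining estimates along the way."""
    current, hops = src, 0
    while current != dest and hops < max_hops:
        node = nodes[current]
        nxt = node.choose(dest)
        # the Q-routing step: the neighbour reports its own best estimate
        node.update(dest, nxt, hop_cost + nodes[nxt].estimate(dest))
        current, hops = nxt, hops + 1
    return hops

# A small line network A - B - C - D
random.seed(1)
nodes = {"A": Node("A", ["B"]), "B": Node("B", ["A", "C"]),
         "C": Node("C", ["B", "D"]), "D": Node("D", ["C"])}
for _ in range(300):
    send(nodes, "A", "D")
print(round(nodes["A"].estimate("D"), 1))  # learned distance approaches 3 hops
```

After a few hundred packets the estimates settle at the true hop distances, with no routing table ever being distributed.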
Another major shift towards a networked world has been the rise of Social Networks. These have attracted billions of users for networking applications such as Facebook, LinkedIn, Twitter etc. These are providing the early social glue for World 2.0, offering pervasive connectivity by processing and sharing multi-media content. Together with smart portable devices, they cater to the user’s every desire, through hundreds of thousands of web applications covering all aspects of social experience– entertainment, lifestyle, finance, health, news, reference and utility management etc.
With increased user mobility, location sharing and a desire to always be connected, there is a growing trend towards personalized networks where body, home, urban and vehicle sensory inputs will be linked in densely connected meshes to intermediate specialised networks supporting healthcare, shopping, banking etc.
The explosion of social networked communities is triggering new interest in collaborative systems in general. Recent research in network science has made a significant contribution to a more profound understanding of collaborative behaviour in business ecosystems. As discussed in previous posts, networked ‘swarm’ behaviour can demonstrate an increase in collective intelligence. Such collective synergy in complex self-organising systems allows ‘smarter’ problem solving as well as greater decision agility. By linking together in strategic and operational networks, enterprises can therefore achieve performance superior to anything previously possible.
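The collective ‘swarm’ synergy described above is commonly illustrated with particle swarm optimisation, in which simple agents sharing their best-known results converge on an optimum more reliably than any agent alone. A minimal sketch, with an arbitrary two-dimensional objective standing in for a business optimisation problem:

```python
import random

def pso(objective, dim=2, particles=20, iters=100):
    """Minimal particle swarm: agents share a global best position."""
    random.seed(42)  # deterministic for the example
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]            # each agent's personal best
    gbest = min(pbest, key=objective)[:]   # the swarm's shared best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward own best + pull toward swarm best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

def sphere(p):
    """Arbitrary example objective: a bowl with its minimum at (3, -2)."""
    return (p[0] - 3) ** 2 + (p[1] + 2) ** 2

best = pso(sphere)
```

No individual particle knows where the minimum is, yet the swarm as a whole homes in on it, which is the point of the collective-intelligence argument.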
The key characteristic of the smart business network of the future will be its ability to react rapidly to emerging opportunities or threats, by selecting and linking appropriate business processes. Such networks will be capable of quickly and opportunistically connecting and disconnecting relationship nodes, establishing business rules for participating members on the basis of risk and reward.
This ‘on the fly’ capacity to reconfigure operational rules, will be a crucial dynamic governing the success of tomorrow’s enterprise. CIOs must also learn to span the architectural boundaries between their own networked organisation and the increasingly complex social and economic networked ecosystems in which their organisations are embedded.
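That risk-and-reward gating of relationship nodes can be caricatured in a few lines. The partner names, figures and threshold below are invented purely for illustration:

```python
# Hypothetical sketch of 'on the fly' network reconfiguration: partner
# links are kept or dropped as their risk/reward profile changes.
# All names, figures and the threshold are invented for illustration.
partners = {
    # name: (risk, reward) - both on an arbitrary 0-1 scale
    "supplier_a":  (0.2, 0.9),
    "supplier_b":  (0.7, 0.4),
    "logistics_x": (0.3, 0.8),
}

def reconfigure(partners, threshold=1.5):
    """Keep links only where reward sufficiently outweighs risk."""
    return {name for name, (risk, reward) in partners.items()
            if reward / risk >= threshold}

active = reconfigure(partners)
print(sorted(active))  # supplier_b's link is dropped
```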
In fact the business community is now struggling to keep up with the continuous rate of innovation demanded by its users. Social network solutions have the potential to help meet this demand by shaping the design of future architectures to provide better ways to secure distributed systems.
So what is the future of this new collaborative, densely configured networked world? What we are witnessing is the inter-weaving of a vast number of evolving and increasingly autonomous networks, binding our civilisation in a web of computational nodes and relational connections, spanning personal to global interactions.
By 2050 the new World 2.0 cyber-infrastructure will link most individuals, enterprises and communities on the planet. Each will have a role to play in our networked future, as the cells of our brain do- but it will be a future in which the sum of the connected whole will also be an active player.
Friday, June 25, 2010
Future Enterprise- The Greening System
The net energy impact of an enterprise’s products and services on the community far outweighs the benefits of any savings in its computer processing operations.
Saving energy in the 21st century’s computing ecosystem is a vital component in achieving the goal of a sustainable society and is currently being addressed within the context of numerous emerging technologies including- flexible cloud processing, low-energy mobile and sensor communications, outsourcing of services, infrastructure virtualisation, application integration, embedded electronics and low energy processor design.
But of far more significance is the potential role of information and computing technology in reducing carbon emissions in most of today’s service processes- whether relating to power generation, manufacturing, transport, service delivery etc.
This revolution, using the computer as the most effective green machine ever designed, is rapidly taking shape with the emergence of the ‘smarter planet’ mantra. This has already been adopted by every major systems and software provider including- IBM, Cisco, Google, SAP, Apple, Intel, Microsoft and Oracle and promises the optimisation of the planet’s infrastructure.
This will presage more efficient healthcare, education, communication, utility and government services, as well as higher quality industry outcomes in construction, mining, travel, engineering, agriculture etc, by applying the latest advances in artificial intelligence, design, materials, electronics, computing and control sciences.
As well as the enormous energy reduction payoffs of smarter infrastructure, the ‘smarter planet’ will manifest in a limitless number of areas including-
Simulation-based Engineering- solving previously intractable design problems and achieving significant cost and energy reductions by applying computer simulated models and prototypes for testing purposes:
Transportation Systems- managing major traffic flows and supply chains, which will demand increasingly complex integration and scheduling via multi-modal transport networks:
Developing Nations Environments - allowing the populations of these countries to join the developed networked knowledge world and gain leverage through the application of cheap sensors and low cost intelligent mobile devices to help solve complex environmental and resource allocation problems.
Such global energy reduction potential, gained by using the computer to generate overall outcome savings, is indisputable and in fact totally dwarfs the benefits gained from optimising computer processing as an end in itself.
But greater sustainability benefits are also conditional on the performance and effectiveness of computer processing, with real-time, event-driven applications becoming increasingly common. Computer processing energy gains must therefore evolve within the constraints of process performance needs. Higher performance processing may be more energy intensive, but still deliver far greater benefits in terms of outcome energy savings; so that deriving an optimum trade-off between energy input efficiency and performance output efficiency will be critical.
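The trade-off described above can be made concrete with hypothetical figures: a processing option that consumes more energy may still dominate once outcome savings are counted. All numbers below are invented for illustration:

```python
# Illustrative energy trade-off: higher-performance processing costs
# more energy to run, yet can still win on net outcome savings.
# All figures are hypothetical, invented purely for this example.
options = {
    # name: (processing_energy_kWh, outcome_savings_kWh)
    "low-power batch": (100, 1_000),
    "high-performance real-time": (400, 5_000),
}

def net_saving(name):
    """Outcome savings minus the energy spent on processing."""
    processing, savings = options[name]
    return savings - processing

best_option = max(options, key=net_saving)
print(best_option, net_saving(best_option))
```

Here the energy-hungrier real-time option delivers the larger net saving, which is exactly the optimum trade-off the text argues must be derived.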
But an even more significant energy paradigm is emerging, which encompasses the capacity of the enterprise to deliver the sustainable benefits of its services to the wider community.
In the final analysis it is the enterprise that is the primary implementer of services to its customers- whether individuals or businesses. These are the beneficiaries or otherwise of its products and services.
A General Motors that keeps churning out gas-guzzling vehicles, totally unsuited to a greener environment and its customers’ needs, may do major harm to the planet no matter how efficient or sophisticated its computerised operational systems.
What this boils down to is the role of the future enterprise as the most relevant greening system in relation to the communities it services. It is the enterprise- small, large, public or private, which is the key enabling system to achieving a greener world.
Tomorrow’s enterprise will be the primary harnesser of human mind power, amplified by expanding computational intelligence in our world. Its potential therefore to create a greener future through its impact on the wellbeing of the wider community is what ultimately should be assessed as its true value to society.
Friday, April 30, 2010
Future Enterprise- Future Brain Architecture
Is today’s enterprise, including its IT acolytes, missing something very obvious and vitally important in its current management mindset or is it just an inability by a traditionally conservative constituency, to accept the radical paradigm shift involved?
Enterprise IT is beginning to dip its toe in the water and borrow some of its inspiration from biological models. For example, a number of the most valuable AI techniques routinely applied in business- genetic algorithms, neural networks, DNA and swarm computation, are biologically based, as is the concept of the organisation as a complex ecosystem, rather than a rigid hierarchical structure, largely disconnected from its environment.
Networks are also getting a look-in. Complex decision-making, using elements of autonomous, self-organising and intelligent networks and incorporating complex feedback loops to monitor operational performance and enhance relationships with customers and suppliers, is now being trialled.
But the current enterprise management model is still missing the big picture- the shift towards an efficient, self-regulating, self-organising, self-evolving framework, so critical for survival in a future fast-moving, uncertain physical and social environment.
The most efficient blueprint for such an architecture and one honed over billions of years and governing all animal life, is the living brain; in particular the advanced human brain.
For the last thirty years, since the advent of computerised imaging techniques, scientists have been trying to prise open the secrets of the brain’s incredible power and flexibility. Not just how it computes so efficiently, but its ability to adapt, evolve and manage its 100 billion neurons and dozens of specialised structures, as well as all the relationships of the body’s incredibly rich cellular processes, organs and bio-systems. It has also mastered the capacity to flexibly adapt to a vast number of environmental challenges- both physical and social, while at the same time continuing to evolve and grow its intelligence at the individual, group and species level.
If only it was possible to harness this most complex object in the universe, to manage our own still-primitive, nascent organisational structures.
So what’s the secret to the brain’s incredible success in guiding the human race through its evolutionary odyssey? Well finally the creativity and perseverance of countless dedicated scientists is starting to pay dividends, with two recent major conceptual breakthroughs-
A Unified Theory of the Brain and the key to the Sub-conscious Brain.
Current theories of the mind and brain have primarily focussed on defining the mental behaviour of others using the brain’s mirror neurons. These are a set of specialized cells that fire when an animal observes an action performed by another. Therefore, the neurons ‘mirror’ or reflect the behaviour of the other, as though the observer was itself acting. Such neurons have been directly observed in primates and more recently humans and are believed to exist in other species, such as birds.
However despite an increasing understanding of the role of such mechanisms in shaping the evolution of the brain, current theories have failed to provide an overarching or unified framework linking all mental and physical processes- until recently. A group of researchers from University College London, headed by neuroscientist Karl Friston, has now derived a mathematical framework that provides a credible basis for such a holistic theory.
This is based on Bayesian probability theory, which allows predictions to be made about the validity of a proposition or phenomenon based on the evidence available. Friston’s hypothesis builds on an existing theory known as the “Bayesian Brain”, which postulates the brain as a probability machine that constantly updates its predictions about its environment based on its perception, memory and computational capacity. In other words it is constantly learning about its place in the world by filtering input knowledge through a statistical assessment process.
The crucial element in play, is that these encoded probabilities are based on cumulative experience or evidence, which is updated whenever additional relevant data becomes available; such as visual information about an object’s location or behaviour. Friston’s theory is therefore based on the brain as an inferential agent, continuously refining and optimising its model of the past, present and future.
This can be seen as a generic process applied to all functions and protocols embedded in the brain; continually adapting the internal state of its myriad neural connections, as it learns from its experience. In the process it attempts to minimise the gap between its predictions and the actual state of the external environment on which its survival depends.
Minimising this gap or prediction error is crucial and can be measured in terms of the concept of ‘free energy’ used in thermodynamics and statistical mechanics. This is defined as the amount of useful work that can be extracted from a system such as an engine and is roughly equivalent to the difference between the total energy provided by the system and its waste energy or entropy. In this case the prediction error is equated to the free energy of the system, which must be minimised as far as practical if the organism is to continue to develop.
All functions of the brain have therefore evolved to reduce predictive errors to enhance the learning process. When the predictions are right, the brain is rewarded by being able to respond more efficiently and effectively, using less energy. If it is wrong, additional energy is required to find out why and formulate a better set of predictions.
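The error-minimisation loop described above can be sketched in miniature: an internal prediction is nudged toward each observation, and the squared prediction error, standing in here for the ‘free energy’, shrinks as the model improves. The signal and learning rate are illustrative assumptions:

```python
# Minimal sketch of prediction-error minimisation: the internal model
# is nudged toward each observation, so the surprise (squared error)
# shrinks over time. Signal and learning rate are illustrative.
def adapt(prediction, observation, rate=0.3):
    """One update step: move the prediction toward the observation."""
    error = observation - prediction
    return prediction + rate * error, error ** 2

prediction, env_state = 0.0, 10.0  # internal model vs. a stable environment
errors = []
for _ in range(20):
    prediction, sq_error = adapt(prediction, env_state)
    errors.append(sq_error)

print(f"first error {errors[0]:.1f}, last error {errors[-1]:.5f}")
```

Once the predictions are right, each step costs almost nothing, mirroring the text’s point that a well-calibrated brain responds using less energy.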
The second breakthrough has come from a better understanding, again through neuro-imaging, of the brain’s subconscious processes. It’s been revealed that the brain is incredibly active, even when a person is not purposely thinking or acting, for example when daydreaming or asleep. It is in fact keeping subliminal watch, communicating, synchronising and prepping its networks for a conscious future action or response; continuously organising and refining its neural systems such as the cortex and memory; in the process using up to twenty times as much energy as the conscious mode of operation requires. This mechanism is called the brain’s default mode network or DMN and has only been recently recognised as a cogent system in its own right.
Now fast forward to the future enterprise, running under an architecture that incorporates these two knowledge breakthroughs. What are the additional benefits over the old model? Not too difficult to deduce.
Any organisation that is capable of constantly and seamlessly monitoring itself in relation to its internal functions and external environment; assessing its performance against its predictions and requirements in real-time through efficient feedback mechanisms; being aware of changes in its environment and opportunities to improve its performance and productivity; self-optimising its functions and goals; self-correcting its actions, searching autonomously for the best solutions for performing complex decision-making and constantly building on its experience and intelligence – must mark a vast improvement over the current model.
Not only that- this model has been tested and operationally proven in the cauldron of evolution over billions of years. Not a bad benchmark!
Too difficult to introduce into mainstream enterprise operations? I don’t think so, not in an era when we can build the world wide web, space-stations, large particle colliders, models of galaxies and the multiverse, apply genetic engineering techniques to combat disease, grow new organs from stem cells and plan to put humans on Mars!
Enterprise IT is beginning to dip its toe in the water and borrow some of its inspiration from biological models. For example, a number of the most valuable AI techniques routinely applied in business- genetic algorithms, neutral networks, DNA and swarm computation, are biologically based, as is the concept of the organisation as a complex ecosystem, rather than a rigid hierarchical structure, largely disconnected from its environment.
Networks are also getting a look-in. Complex decision-making, using elements of autonomous, self-organising and intelligent networks, incorporating complex feedback loops to monitor operational performance and enhance relationships with customers and suppliers, are now being trialled.
But the current enterprise management model is still missing the big picture- the shift towards an efficient, self-regulating, self-organising, self-evolving framework, so critical for survival in a future fast-moving, uncertain physical and social environment.
The most efficient blueprint for such an architecture and one honed over billions of years and governing all animal life, is the living brain; in particular the advanced human brain.
For the last thirty years, since the advent of computerised imaging techniques, scientists have been trying to prise open the secrets of the brain’s incredible power and flexibility. Not just how it computes so efficiently, but its ability to adapt, evolve and manage its 100 billion neurons and dozens of specialised structures, as well as all the relationships of the body’s incredibly rich cellular processes, organs and bio-systems. It has also mastered the capacity to flexibly adapt to a vast number of environmental challenges- both physical and social, while at the same time continuing to evolve and grow its intelligence at the individual, group and species level.
If only it was possible to harness this most complex object in the universe, to manage our own still-primitive, nascent organisational structures.
So what’s the secret to the brain’s incredible success in guiding the human race through its evolutionary odyssey? Well finally the creativity and perseverance of countless dedicated scientists is starting to pay dividends, with two recent major conceptual breakthroughs-
A Unified Theory of the Brain and the key to the Sub-conscious Brain.
Current theories of the mind and brain have primarily focussed on how we model the mental behaviour of others using the brain’s mirror neurons- a set of specialised cells that fire when an animal observes an action performed by another. The neurons ‘mirror’ or reflect the behaviour of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and more recently in humans, and are believed to exist in other species, such as birds.
However, despite an increasing understanding of the role of such mechanisms in shaping the evolution of the brain, current theories had failed to provide an overarching or unified framework linking all mental and physical processes- until recently. A group of researchers at University College London, headed by neuroscientist Karl Friston, has now derived a mathematical framework that provides a credible basis for such a holistic theory.
This is based on Bayesian probability theory, which allows predictions to be made about the validity of a proposition or phenomenon based on the evidence available. Friston’s hypothesis builds on an existing theory known as the “Bayesian Brain”, which postulates the brain as a probability machine that constantly updates its predictions about its environment based on its perception, memory and computational capacity. In other words it is constantly learning about its place in the world by filtering input knowledge through a statistical assessment process.
The crucial element in play is that these encoded probabilities are based on cumulative experience or evidence, which is updated whenever additional relevant data becomes available- such as visual information about an object’s location or behaviour. Friston’s theory therefore treats the brain as an inferential agent, continuously refining and optimising its model of the past, present and future.
This can be seen as a generic process applied to all functions and protocols embedded in the brain; continually adapting the internal state of its myriad neural connections, as it learns from its experience. In the process it attempts to minimise the gap between its predictions and the actual state of the external environment on which its survival depends.
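As an illustrative sketch (not from the source), the Bayesian updating at the heart of the “Bayesian Brain” idea can be written in a few lines: a prior belief over hypotheses is revised by each new piece of evidence, with the posterior becoming the next prior. The hypotheses and likelihood values below are invented purely for the example.

```python
# Minimal Bayesian updating: posterior is proportional to prior x likelihood.
def bayes_update(prior, likelihoods):
    """Return the normalised posterior over hypotheses."""
    unnormalised = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Two hypotheses about an object's location: "left" vs "right".
belief = [0.5, 0.5]  # flat prior before any evidence
# Each observation gives the likelihood of the data under (left, right).
for observation in [(0.8, 0.2), (0.7, 0.3), (0.9, 0.1)]:
    belief = bayes_update(belief, observation)

print(belief)  # belief in "left" strengthens with each consistent observation
```

After three observations favouring “left”, the belief concentrates there- the same cumulative, evidence-driven refinement the passage describes.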
Minimising this gap or prediction error is crucial and can be measured in terms of the concept of ‘free energy’ used in thermodynamics and statistical mechanics. This is defined as the amount of useful work that can be extracted from a system such as an engine and is roughly equivalent to the difference between the total energy provided by the system and its waste energy or entropy. In this case the prediction error is equated to the free energy of the system, which must be minimised as far as practical if the organism is to continue to develop.
All functions of the brain have therefore evolved to reduce predictive errors to enhance the learning process. When the predictions are right, the brain is rewarded by being able to respond more efficiently and effectively, using less energy. If it is wrong, additional energy is required to find out why and formulate a better set of predictions.
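The prediction-error loop above can be illustrated with a toy delta-rule update (a deliberate simplification- Friston’s free-energy formalism is far richer than this sketch): an internal estimate is nudged toward each observation by a fraction of the error, so the gap between prediction and environment shrinks over time.

```python
# Toy prediction-error minimisation: the estimate chases the observed state,
# and the error (the "gap" the organism must pay for) decays step by step.
def minimise_prediction_error(estimate, observations, learning_rate=0.3):
    errors = []
    for observed in observations:
        error = observed - estimate        # prediction error
        estimate += learning_rate * error  # reduce the gap
        errors.append(abs(error))
    return estimate, errors

# A stable environment repeatedly presenting the value 1.0.
estimate, errors = minimise_prediction_error(0.0, [1.0] * 10)
print(round(estimate, 3))      # estimate approaches the observed state
print(errors[0] > errors[-1])  # early errors are larger than late ones
```

When predictions are right, updates (and hence “energy” spent correcting) approach zero- the reward for accurate prediction described above.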
The second breakthrough has come from a better understanding, again through neuro-imaging, of the brain’s subconscious processes. It’s been revealed that the brain is incredibly active even when a person is not purposely thinking or acting- for example when daydreaming or asleep. It is in fact keeping subliminal watch: communicating, synchronising and prepping its networks for future conscious action, and continuously organising and refining neural systems such as the cortex and memory- in the process using up to twenty times as much energy as the conscious mode of operation requires. This mechanism, the brain’s default mode network or DMN, has only recently been recognised as a cogent system in its own right.
Now fast forward to the future enterprise, running under an architecture that incorporates these two knowledge breakthroughs. What are the additional benefits over the old model? Not too difficult to deduce.
Any organisation capable of constantly and seamlessly monitoring its internal functions and external environment; assessing its performance against predictions and requirements in real time through efficient feedback mechanisms; spotting changes in its environment and opportunities to improve performance and productivity; self-optimising its functions and goals; self-correcting its actions; searching autonomously for the best solutions to complex decision problems; and constantly building on its experience and intelligence- must mark a vast improvement over the current model.
Not only that- this model has been tested and operationally proven in the cauldron of evolution over billions of years. Not a bad benchmark!
Too difficult to introduce into mainstream enterprise operations? I don’t think so, not in an era when we can build the world wide web, space-stations, large particle colliders, models of galaxies and the multiverse, apply genetic engineering techniques to solve diseases, grow new organs from stem cells and put a man on Mars!
Monday, April 12, 2010
Future Enterprise- Rebirthing Hal
The arrival of super-smart evolutionary computers- capable of autonomous reasoning, learning and emulating the human-like behaviour of the mythical HAL in Arthur C. Clarke’s 2001: A Space Odyssey- is imminent.
The Darwinian evolutionary paradigm has finally come of age in the era of supercomputing. The evolutionary algorithms of AI, which now guide many problem-solving and optimisation processes, are also being applied to the design of increasingly sophisticated computing systems. In a real sense, the evolutionary paradigm is guiding the design of evolutionary computing, which in turn will lead to the development of more powerful evolutionary algorithms. This process will inevitably generate hyper-smart computing systems and therefore advanced knowledge, with each evolutionary computing advance catalysing the next in a fractal process.
Evolutionary design principles have been applied in all branches of science and technology for over a decade, including the development of advanced electronic hardware and software, now incorporated in personal computing devices and robotic controllers.
One of the first applications of a standard genetic algorithm was the design of an electronic circuit which could discriminate between two tone signals, or voices in a crowded room. This was achieved using a Field Programmable Gate Array or FPGA chip, on which a matrix of transistors or logic cells was reprogrammed on the fly in real time. Each new design configuration was varied or mutated, and could then be immediately tested for its ability to achieve the desired output- discriminating between the two signal frequencies.
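The mutate-and-test cycle just described can be sketched as a minimal genetic algorithm in software. The original experiment evolved configurations on real FPGA hardware; here a bit string stands in for the circuit configuration and a hypothetical target pattern stands in for the desired signal behaviour- both are assumptions for the sketch.

```python
import random

random.seed(0)  # reproducible run for the example

TARGET = [1, 0] * 16  # stand-in for a desired circuit behaviour (assumption)

def fitness(config):
    """Score a configuration: how many positions match the target."""
    return sum(1 for a, b in zip(config, TARGET) if a == b)

def mutate(config, rate=0.05):
    """Flip each bit with a small probability, as in hardware reconfiguration."""
    return [1 - bit if random.random() < rate else bit for bit in config]

# Start with a random population of candidate configurations.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the ten fittest; refill the population with their mutants.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))
```

Elitist selection ensures the best configuration is never lost, so fitness climbs steadily toward the target- the same variation-plus-immediate-testing loop the FPGA experiment ran in silicon.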
Such evolutionary-based technologies provide the potential to not only optimise the design of computers, but facilitate the evolution of self-organisational learning and replicating systems that design themselves. Eventually it will be possible to evolve truly intelligent machines that can learn on their own, without relying on pre-coded human expertise or knowledge.
In the late forties, John von Neumann conceptualised a self-replicating computer using a cellular automaton architecture: identical computing devices arranged in a chequerboard pattern, each changing its state based on those of its nearest neighbours. One of the earliest examples was the Firefly machine, with 54 cells controlled by circuits which evolved to flash on and off in unison.
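A one-dimensional cellular automaton shows the nearest-neighbour update rule in miniature (an illustrative sketch, not von Neumann’s actual construction, which used a two-dimensional grid and many cell states). Here every cell updates in lockstep from its own state and its two neighbours, using elementary rule 110.

```python
# Elementary cellular automaton: each cell's next state is determined by
# the 3-cell neighbourhood (left, centre, right) via the rule's bit pattern.
RULE = 110  # the rule number encodes the output for all 8 neighbourhoods

def step(cells):
    """Advance every cell one generation, with wrap-around at the edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        nxt.append((RULE >> neighbourhood) & 1)
    return nxt

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

From one seed cell, complex structure propagates purely through local neighbour interactions- the essence of the cellular automaton architecture.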
The evolvable hardware that researchers created in the late 90’s and early this century was proof of principle of the potential ahead. For example, a group of Swiss researchers extended Von Neumann's dream by creating a self-repairing, self-duplicating version of a specialised computer. In this model, each processor cell or biomodule was programmed with an artificial chromosome, encapsulating all the information needed to function together as one computer and capable of exchanging information with other cells. As with each biological cell, only certain simulated genes were switched on to differentiate its function within the body.
A stunning example of the application of Darwinian principles to the mimicking of life was the development of the CAM (Cellular Automata Machine) Brain in 2000. It contained 40 million neurons, running on 72 linked FPGAs of 450 million autonomous cells. The same year, the first hyper-computer- HAL-4rw1 from Star Bridge Systems- reached commercial production. Based on FPGA technology, it operated at four times the speed of the world’s fastest supercomputer.
And at the same time NASA began to create a new generation of small intelligent robots called ‘biomorphic’ explorers, designed to react to the environment in similar ways to living creatures on earth.
Another biological approach applied to achieve intelligent computing was the neural network model. Such networks simulate the firing patterns of neural cells in the brain, which accumulate incoming signals until a discharge threshold is reached, allowing information to be transmitted to the next layer of connected cells. However, such digital models cannot accurately capture the subtle firing patterns of real-life cells, which contain elements of both periodic and chaotic timing. The latest simulations therefore use analogue neuron circuits to capture the information encoded in these time-sensitive patterns and mimic real-life behaviour more accurately.
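The accumulate-until-threshold behaviour just described can be sketched as a leaky integrate-and-fire neuron (a simplified software illustration; the threshold, leak and input values are invented for the example).

```python
# A minimal integrate-and-fire neuron: incoming signals accumulate in the
# membrane potential until a discharge threshold is crossed, then the cell
# fires a spike and resets.
class IntegrateAndFireNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak        # fraction of potential retained each step
        self.potential = 0.0

    def step(self, incoming):
        self.potential = self.potential * self.leak + incoming
        if self.potential >= self.threshold:
            self.potential = 0.0  # discharge and reset
            return 1              # spike transmitted to the next layer
        return 0

neuron = IntegrateAndFireNeuron()
inputs = [0.3, 0.3, 0.3, 0.3, 0.0, 0.3, 0.3, 0.3, 0.3]
spikes = [neuron.step(x) for x in inputs]
print(spikes)  # → [0, 0, 0, 1, 0, 0, 0, 0, 1]
```

The timing of the spikes, not just their count, carries information- which is why the analogue circuits mentioned above matter for capturing real-life firing patterns.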
Neural networks and other forms of biological artificial intelligence are now being combined with evolutionary models, taking a major step towards the goal of artificial cognitive processing; allowing intelligent computing systems to learn on their own and become experts in any chosen field.
Eventually it will be possible to use evolutionary algorithms to design artificial brains, augmenting or supplanting biological human cognition. This is a win-win for humans. While the biological brain, with its tens of billions of neurons each connected to thousands of others, has assisted science to develop useful computational models, a deeper understanding of computation and artificial intelligence is also providing neuroscientists and philosophers with greater insights into the nature of the brain and its cognitive processes.
The future implications of the evolutionary design paradigm are therefore enormous. Universal computer prototypes capable of continuous learning are now reaching commercial production. Descendants of these systems will continue to evolve, simulating biological evolution through genetic mutation and optimisation, powered by quantum computing. They will soon create capabilities similar to those of HAL in Arthur C. Clarke’s 2001: A Space Odyssey- and only a few decades later than predicted.
However, the reincarnation of the legendary HAL may in fact be realised by a much more powerful phenomenon incorporating all current computing and AI advances- the Intelligent World Wide Web. As previously discussed, this multidimensional network of networks, empowered by human and artificial intelligence and utilising unlimited computing and communication power, is well on the way to becoming a self-aware entity and the ultimate decision partner in our world.
Perhaps HAL is already alive and well.