Re-thinking things

Artificially Intelligent Grid?

I’ve been thinking for a while that most artificial intelligence attempts get one big thing wrong. They design single-purpose systems that do one thing well, but have no other aspects to their behavior. Neuroscientists often do the same thing, carefully noodling out the mechanisms and structures that support a single purpose.

Neuroscience is often advanced by war, and by people who suffer some sort of brain injury. An injury may take out an entire cognitive function, but personality and consciousness, while deformed, remain. So when is it that a system exhibits intelligent behavior? It may not be when enough low-level programs are written, but rather when enough service-oriented systems are amalgamated.

Recently I have been reading some background economic theory from Lynne Kiesling ( www.knowledgeproblem.com ). In her introduction, she leads the reader through the definition of standard markets as complex adaptive systems. Complex adaptive systems have large numbers of diverse agents that interact. Each agent reacts to the actions of the other agents and to changes in the environment. Agents are autonomous, using distributed control and decentralized decision making. Eventually, the dominant interaction becomes the agents interacting with a system environment that was itself created by the agents’ own independent decision making.

The market pattern results in emergent self-organization, in which a large-scale pattern emerges out of the smaller decisions and interactions. The emergent pattern is not imposed top-down, but rather arises from decentralized agents interacting within the bounds of distributed control (or self-control, if you will).
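To make that concrete, here is a toy sketch (in Python, with rules and numbers invented purely for illustration) of many autonomous agents reacting to a price signal that is itself produced by their aggregate behavior; no one imposes the resulting pattern, yet the system settles into one:

    # Toy model of a complex adaptive system: many autonomous agents react to a
    # shared price signal, and the price is in turn produced by their aggregate
    # behavior. No agent is in charge, yet overall demand settles into a pattern.
    # All rules and numbers here are invented for illustration.
    import random

    class Agent:
        def __init__(self):
            # Diverse agents: each values consumption differently.
            self.willingness_to_pay = random.uniform(0.5, 1.5)

        def demand(self, price):
            # Decentralized decision: consume more when the price is below this
            # agent's private valuation, nothing when it is above.
            return max(0.0, self.willingness_to_pay - price)

    def simulate(n_agents=1000, steps=50):
        agents = [Agent() for _ in range(n_agents)]
        price = 0.1
        for step in range(steps):
            total_demand = sum(a.demand(price) for a in agents)
            # The environment (the price) is created by the agents' own decisions.
            price += 0.0005 * (total_demand - n_agents * 0.4)
            if step % 10 == 0:
                print(f"step {step:3d}  price {price:.3f}  demand {total_demand:.1f}")

    if __name__ == "__main__":
        simulate()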

Another characteristic of such markets is resilience in the face of change, what the economists call adaptive capacity. This is of course a key element of intelligence.

For an old brain chemistry dude, this description of complex adaptive systems sounds a whole lot more like the proper model for intelligence and consciousness than do many of the reductive neuroscience models, let alone the AI approach. It clearly is closely aligned with the principles and language of embryology. Any number of gee-whiz articles since the sequencing of the human genome have explained that “it is really not a blueprint, but an organizing principle”. Emergent self-organization is a pretty good description of how the body organizes itself, actually.

We’ve been talking about using building system-based agents as players in emerging energy markets. But now I’m wondering. Are we defining an ecosystem of agents that will be self-organizing, irrespective of the economics? Is it mandatory that we have a multiplicity of agents, to offer us resilience rather than stampedes during a crisis? Should we think of building services and efficient energy use as the tropisms these agents follow?

What if we’ve finally found the path to Artificial Intelligence…

Getting from Registers to Ontologies

Control programming today is like writing device drivers. Internal to a computer, low-level programming is about moving data in and out of internal registers. Control system programming is, for the most part, reading and setting remote points. oBIX 1.0, first and foremost, provides point services for setting, reading, and tracking remote control systems. By defining a web-services-based pattern for accessing the point service, control systems have been made accessible to enterprise systems and to enterprise programmers. Point services are not, however, enterprise friendly.
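For the enterprise programmer, a point service boils down to reading and writing values over HTTP. Here is a rough sketch of what that might look like against an oBIX-style XML interface; the server address, paths, and XML layout are illustrative placeholders, not lifted from the specification:

    # A rough sketch of reading and writing a point over an oBIX-style HTTP/XML
    # interface. The server address, paths, and XML layout below are illustrative
    # placeholders, not lifted from the oBIX 1.0 specification.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE = "http://building.example.com/obix"   # hypothetical building server

    def read_point(point_path):
        """GET the XML object for a point and return its current value."""
        with urllib.request.urlopen(f"{BASE}/{point_path}") as resp:
            root = ET.fromstring(resp.read())
        # The point's present value is assumed to ride in a 'val' attribute,
        # e.g. <real ... val="72.5"/>.
        return root.get("val")

    def write_point(point_path, value):
        """PUT a new value to a writable point (sketch only)."""
        body = f'<real val="{value}"/>'.encode()
        req = urllib.request.Request(
            f"{BASE}/{point_path}", data=body, method="PUT",
            headers={"Content-Type": "text/xml"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example: read a zone temperature, then nudge its setpoint.
    # print(read_point("zone3/temperature"))
    # write_point("zone3/setpoint", 72.0)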

In almost all creation myths, the first task of man is the naming of things. We call formal rules for naming of things “semantics”. The next task for oBIX is to move to formal semantics of embedded systems. There are three approaches we could follow for semantics: tagging, system, and service.

Tagging is the most traditional approach to naming in control systems. Tags are merely names given to each point. Tags may appear on the initial schematic diagram of the control system. There may be some sort of internal logic to tagging: CWCRT007 may be the Chilled Water Coil Return Temperature #7, but I might just as easily use the tag CWC007RT for the Chilled Water Coil 7 return temperature. The control system integrator assigns tags within control systems. If I am lucky, the integrator working on the third floor will use a naming convention compatible with that used by the one working on the sixth floor. If I am an advanced owner, I might have specified the standard to be used pre-construction. Tag standards such as these do little to help the enterprise work with multiple buildings.
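The fragility shows up quickly in code. Here is a small sketch, using the two hypothetical conventions above, of a parser that works only as long as every integrator's convention has been anticipated:

    # Two integrators, two tag conventions for the same point: "Chilled Water Coil
    # Return Temperature #7". A parser written for one convention silently fails
    # on the other. Both conventions are hypothetical examples from the text.
    import re

    FLOOR3_PATTERN = re.compile(r"^CWCRT(\d{3})$")   # CWCRT007: coil number at the end
    FLOOR6_PATTERN = re.compile(r"^CWC(\d{3})RT$")   # CWC007RT: coil number in the middle

    def coil_number(tag):
        for pattern in (FLOOR3_PATTERN, FLOOR6_PATTERN):
            match = pattern.match(tag)
            if match:
                return int(match.group(1))
        raise ValueError(f"Unrecognized tag convention: {tag}")

    print(coil_number("CWCRT007"))    # 7
    print(coil_number("CWC007RT"))    # 7
    # print(coil_number("CW-CRT-7"))  # raises: a third integrator's convention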

System-based semantics name things by the system they are part of. This approach aligns well with the data life cycle defined by NBIMS (National Building Information Model Standard), especially if the contractor uses COBie (Construction Operations Building Information Exchange) to hand over NBIMS information to maintenance and operations. Each system gets the same name it had on the initial design documents. One problem with this approach is that most design documents have significant errors and duplication in the controls portions. These names, while useful to those performing maintenance on the building, often have little to do with how the tenants see the building, and thus may be difficult for enterprise programmers to use.

Service-based semantics name systems for what they do, not what they are made of. Service-based semantics may be mapped to the spaces they support, i.e., “Heating and Cooling for Big Conference Room”. This makes it easy to link business processes with building processes; we can easily imagine inviting the heating and cooling system to the big meeting. It may require additional maintenance, as the C-level executive’s office, however critical, may move from one room to another.

Ontologies are the next step above semantics. Ontologies are the classifications that semantics fit into. A computer-based ontology would enable a computer to fit a system into one or several hierarchies of meaning. To illustrate, consider a room in which a cat is playing. I ask the computer, “Are there any animals present?” Using semantics alone, a cat is just a cat, not an animal, and the answer is “No”. Using an ontology, the system considers that a Cat is a type of Pet and a type of Mammal, and a Mammal is recognized as a type of Animal. Now the computer can answer, correctly, “Yes”.
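The cat example is small enough to sketch directly. A flat semantic lookup knows only names; an ontology adds "is-a" links that a program can walk:

    # A flat lookup knows only the name "Cat"; the ontology adds "is-a" links
    # that a program can walk, so "is there an animal present?" becomes a
    # reachability question. This tiny hierarchy mirrors the example in the text.
    IS_A = {
        "Cat":    {"Pet", "Mammal"},   # a Cat is both a kind of Pet and a kind of Mammal
        "Pet":    set(),
        "Mammal": {"Animal"},
        "Animal": set(),
    }

    def is_a(thing, category):
        """True if 'thing' is linked, directly or transitively, to 'category'."""
        if thing == category:
            return True
        return any(is_a(parent, category) for parent in IS_A.get(thing, set()))

    present_in_room = ["Cat"]

    # Semantics only: the literal name "Animal" does not appear in the room.
    print("Animal" in present_in_room)                       # False
    # Ontology: walk the is-a links from "Cat" up through "Mammal" to "Animal".
    print(any(is_a(x, "Animal") for x in present_in_room))   # True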

Today, we talk of the interactive web, and call it Web 2.0. Web 2.0 is interactive and responsive in ways that the initial internet was not. Small point services add increased functionality, such as the type-ahead and spell-check functions in Gmail. Current discussions of the future of the internet imagine systems being able to negotiate with multiple remote web sites to increase function and responsiveness. These functions may include discovering new remote services on the fly to respond to user or system requests. These new functions will require that systems be able to recognize and understand the services provided by remote systems. The basis of Web 3.0 will be the formal ontological classification of web services.

Service-based semantics provide a better basis for ontologies than do system-based or tag-based semantics. A single system may provide more than one service. Each service may be linked to multiple chains of ontology, just as the cat above is linked to both the “Pet” and the “Mammal” ontological hierarchies. A single service may be linked to both an external standards-based ontology and an internal organization-based ontology.
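As a sketch of that idea, with hierarchy names made up for illustration, one service can be filed under both an external, standards-flavored chain and an internal, organizational one, and found through either:

    # Sketch: one service, two independent classification chains. The hierarchy
    # names below are invented; a real deployment might use a published standard
    # for one chain and the organization's own structure for the other.
    SERVICE = "Heating and Cooling for Big Conference Room"

    CLASSIFICATIONS = {
        SERVICE: [
            ["HVAC Services", "Space Conditioning", "Zone Comfort"],   # external chain (hypothetical)
            ["Headquarters", "Floor 6", "Big Conference Room"],        # internal chain (hypothetical)
        ],
    }

    def services_under(category):
        """Find every service filed anywhere under a given category."""
        return [svc for svc, chains in CLASSIFICATIONS.items()
                if any(category in chain for chain in chains)]

    print(services_under("Zone Comfort"))         # found via the standards-based chain
    print(services_under("Big Conference Room"))  # found via the organization's chain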

All of this sounds at first hearing like a bit of a stretch, but in the near term it will become the basis of what we expect from all system integrations. Interested readers may wish to check out Ontolog ( http://ontolog.cim3.net/ ), a Web 2.0 home for those exploring how to find meaning (ontology) in engineered systems.

Transfer Autonomy to the End User

The fundamental problem with most of the “demand limiting” or “load control” programs out there is that they remove autonomy from the end user. We like choice. We like control. We do not like other people to make choices for us. We do not like to cede control to anyone.

All of the energy saving practices that transfer control of our lives to someone else, be it the Power Company or the Government, will have only short-lived support. We want to wash and dry a shirt this afternoon to wear to the party tonight, and we will pay for it. We want to take a long hot soak in the tub this afternoon, either because of a hard day at work, or to ward off an impending cold. We want to be in charge.

Any energy allocation model that ignores these facts about us as a people will fail. It will suffer from non-participation. If regulated, it will be subject to malicious compliance and sabotage. We must build energy allocation models based upon choice.

The micro-circuitry of GridWise allows appliances to identify themselves and report their individual power usage. The appliances must share their capabilities for saving energy with the house. The web services interfaces of oBIX will allow home, office, and third-party applications to discover building systems just as they discover printers. The smart grid will deliver live electricity pricing to the house.

Software agents, working on our behalf and under our direction, can negotiate power needs with the systems and appliances, and live pricing with the intelligent grid, to meet our desires most economically.
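A minimal sketch of such an agent, with prices, appliance figures, and the deadline all invented for illustration: given the occupant's deadline and a live price forecast, it picks the cheapest hour to run the dryer and reports what running it right now would cost instead:

    # Minimal sketch of a home agent scheduling one appliance against live prices.
    # The prices, the dryer's energy use, and the deadline are invented; a real
    # agent would pull prices from the grid and constraints from the occupant.
    HOURLY_PRICE = {    # $/kWh forecast for the coming hours (hypothetical)
        14: 0.32, 15: 0.29, 16: 0.24, 17: 0.38, 18: 0.41, 19: 0.22,
    }
    DRYER_KWH = 3.0     # energy for one wash-and-dry cycle (hypothetical)

    def schedule(deadline_hour):
        """Pick the cheapest hour that still finishes before the deadline."""
        candidates = {h: p for h, p in HOURLY_PRICE.items() if h < deadline_hour}
        best_hour = min(candidates, key=candidates.get)
        return best_hour, DRYER_KWH * candidates[best_hour]

    # The occupant stays in charge: the deadline ("shirt ready by 7 pm") is theirs,
    # and the agent only reports what overriding the economics would cost.
    best_hour, best_cost = schedule(deadline_hour=19)
    now_cost = DRYER_KWH * HOURLY_PRICE[14]
    print(f"Cheapest: start at {best_hour}:00 for ${best_cost:.2f}")
    print(f"Starting right now (14:00) would cost ${now_cost:.2f}")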

The house must be guided by its inhabitant. You should wash and dry that shirt you want to wear tonight, fully aware of what doing so at the last minute costs you. You should decide whether to follow the economic rules you set up, or to override them to soak in that tub. The decisions of Comfort vs. Economy, of Amenity vs. Cost, should be made explicit.

And the end user must be in charge.

EnergyStar Systems and Data Centers

Data centers consume huge amounts of electricity, much of it wasted. Data centers convert electricity to heat, so all energy used for computing is paired with a similar load for heat removal. Rethinking data centers is a good way to make a strong impact on energy usage in a hurry.

All computers use direct current (DC) to actually run. So do most consumer electronics. That little brick, or wall wart, on the power cord converts power from the alternating current (AC) of the power grid to the DC used by the computer. In most desktop computers and servers, that “brick” is internal to the computer. Improving this process is straightforward, and does not require any fundamental re-engineering of the computers.

Recently I was reading that the EPA is proposing higher efficiency standards for power conversion efficiency in computer systems. Most systems today still have not met the current version of these standards, called EnergyStar. What caught my eye was how much power is wasted even in today’s EnergyStar compliant systems. The numbers are so large that they make the case for re-thinking power systems for data centers far stronger than I had thought.

EnergyStar standards require power supplies that are 80% efficient or better. This means that, to be compliant, no more than 20% of the AC power coming into your data center computer may be converted to heat and lost before it even gets to the computing circuitry. That lost power never gets to support actual computing.

This strengthens the argument for direct current (DC) data centers. DC data centers convert alternating current (AC) power to DC before it is distributed to the servers. Telecommunications has long used DC distribution for its big racks. There are several processes that can be improved by re-thinking power distribution in data centers around the principle of DC distribution.

All of the power lost in conversion today becomes heat inside the data center. That heat must then be removed to keep the computing equipment sufficiently cool. Air conditioning is one of the most significant costs of operating a data center. Many estimate that it takes up to 1.7 times as much energy to remove heat from conditioned space as the initial energy that generated the heat.

By simply moving the AC/DC conversion outside the conditioned space of the data center, 20%-40% of the heat is moved out of the data center, where it will not need to be air conditioned away.
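A quick back-of-the-envelope calculation ties these numbers together; the 100 kW server load is an arbitrary round number, while the 80% and 1.7x figures are the ones cited above:

    # Back-of-the-envelope arithmetic using the figures cited in the text: an
    # 80%-efficient supply inside the conditioned space, and up to 1.7 units of
    # cooling energy per unit of heat removed. The 100 kW of delivered computing
    # power is an arbitrary round number chosen for illustration.
    SUPPLY_EFFICIENCY = 0.80    # the EnergyStar floor discussed above
    COOLING_MULTIPLIER = 1.7    # energy to remove one unit of heat (estimate above)
    IT_LOAD_KW = 100.0          # DC power actually delivered to the computing circuitry

    ac_power_in = IT_LOAD_KW / SUPPLY_EFFICIENCY    # 125 kW drawn from the grid
    conversion_heat = ac_power_in - IT_LOAD_KW      # 25 kW turned into heat

    # If that conversion happens inside the data center, the heat must also be
    # air conditioned away; outside the conditioned space, it simply dissipates.
    cooling_energy = conversion_heat * COOLING_MULTIPLIER   # up to 42.5 kW

    print(f"AC power drawn from the grid:   {ac_power_in:.1f} kW")
    print(f"Heat from AC/DC conversion:     {conversion_heat:.1f} kW")
    print(f"Cooling energy if left inside:  {cooling_energy:.1f} kW")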

Many reputable companies sell data center batteries to support uninterrupted power. These systems usually convert AC to DC to charge the batteries, with the same losses as above. When the servers run off the batteries, the batteries supply DC, which is converted back to AC (a 5-15% loss of power as heat) to support the AC servers. The power supplies in the servers then convert that AC to DC again (as above, with further loss of power and generation of heat).

When people discuss the efficiency of this process, they usually describe the efficiency of the battery storage as the limiting factor. What the process above shows, however, is that as much as half of the power stored may be lost as heat through the double conversion before it ever gets used for computing.

In a DC data center, the batteries still supply DC power, but all of it goes directly to the servers. Not only does this generate less heat, but it can as much as double the effective efficiency and life of the batteries by removing the double conversion for the last yard of distribution.
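Chaining the efficiencies described above shows where "as much as half" comes from; the percentages are the ones in the text (taking the worst case for the inverter), and the chaining is simple multiplication:

    # Chain the conversion efficiencies described above to see how much of the
    # input energy actually reaches the computing circuitry. The individual
    # percentages are the ones in the text, taking the worst case for the inverter.
    CHARGE_EFF   = 0.80   # AC -> DC to charge the batteries (the 80% floor above)
    INVERTER_EFF = 0.85   # DC -> AC out of the batteries (5-15% loss; worst case)
    SERVER_EFF   = 0.80   # AC -> DC again inside each server's power supply

    ac_ups_path = CHARGE_EFF * INVERTER_EFF * SERVER_EFF   # about 0.54
    # DC distribution: one AC -> DC conversion, then straight to the servers'
    # DC bus (ignoring the servers' internal DC-DC regulation for simplicity).
    dc_path = CHARGE_EFF

    print(f"Conventional AC UPS path: {ac_ups_path:.0%} of the energy reaches the chips")
    print(f"DC distribution path:     {dc_path:.0%} of the energy reaches the chips")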

This increase of efficiency comes with today’s technologies, without waiting on the perfection of any novel or exotic battery technology.

It is hard to use the waste heat from air conditioning. A large AC/DC transformer, however, concentrates the energy lost as heat in one place, and it is easy to harvest heat from a single very hot location. I have even seen proposals for fueling a steam distillation chiller off waste heat from a transformer to provide supplemental air conditioning for a data center. You could run domestic hot water heating off the external transformer. I suppose you could even hook a Stirling engine to the transformer and light the building using the waste heat.

We do not have to wait for exotic technologies, although they will come. We need to re-think processes with an awareness of power at each step. Transactive pricing for energy will encourage us to do just that.