Location, Location, Location

In real estate, there is a saying: the three most important factors are Location, Location, and Location. For some classes of building systems, these same three factors should drive the choice of systems. Choosing systems to suit the site requires either a great deal of tight integration, with all the time and expense that entails, or it requires interoperability.

One of the benefits of interoperability is that it supports diversity. There are many reasons to have a diverse set of systems in each house. Some people will want to choose the best of breed. Some people will have different tastes. And sometimes, especially as each site adds a mix of generation and storage to supplement the grid alongside its array of building systems, diversity will support the special needs of the location.

My mother told me tonight of a system her father’s friends had in the high Sierras. The remote location made fuel difficult to bring in. The alpine terrain limited the use of geothermal solutions. The extremes of heat and cold in the high desert made climate control daunting.

Despite these problems, the house was warm or cool as desired. The site-specific system provided this service economically. The secret of the site-specific system was…the swimming pool.

The swimming pool served as the heat sink for the house. Tucked deep into the mountainside beside the house, the pool maintained a temperature moderated by contact with the soil far below the frost line. The pool kept the house warm in winter and cool in summer.

As we build transacted energy systems interacting with agents at the level of every house, a pool-based climate control system should fit right in. The agent should see merely heating and cooling, in the same way that my laptop sees a thumb drive and a disk drive as the same kind of storage. All internal operations should be abstracted up to a simple interface that does not expose the internal details.
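
As a sketch of what that abstraction might look like, here is a minimal, hypothetical climate-service interface in Python. None of the names or methods below come from any standard; the point is only that the agent programs against the service, never against the equipment.

```python
from abc import ABC, abstractmethod


class ClimateService(ABC):
    """Hypothetical interface the house agent sees: heating and cooling
    requests only, with no visibility into the equipment behind them."""

    @abstractmethod
    def request_heat(self, zone: str, setpoint_c: float) -> None: ...

    @abstractmethod
    def request_cool(self, zone: str, setpoint_c: float) -> None: ...


class PoolHeatSinkSystem(ClimateService):
    """Site-specific implementation: moves heat to and from the swimming pool."""

    def request_heat(self, zone: str, setpoint_c: float) -> None:
        print(f"Circulating pool loop to warm {zone} toward {setpoint_c} C")

    def request_cool(self, zone: str, setpoint_c: float) -> None:
        print(f"Circulating pool loop to cool {zone} toward {setpoint_c} C")


class ConventionalHVAC(ClimateService):
    """Off-the-shelf implementation behind the same interface."""

    def request_heat(self, zone: str, setpoint_c: float) -> None:
        print(f"Running the furnace to warm {zone} toward {setpoint_c} C")

    def request_cool(self, zone: str, setpoint_c: float) -> None:
        print(f"Running the compressor to cool {zone} toward {setpoint_c} C")


def keep_comfortable(service: ClimateService, outdoor_temp_c: float) -> None:
    """The agent neither knows nor cares which implementation it holds."""
    if outdoor_temp_c < 18.0:
        service.request_heat("living room", 21.0)
    else:
        service.request_cool("living room", 24.0)


keep_comfortable(PoolHeatSinkSystem(), outdoor_temp_c=2.0)
keep_comfortable(ConventionalHVAC(), outdoor_temp_c=33.0)
```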

This will let me choose the best of breed agents to run my house with its best of breed systems. Those systems will be different from what someone else chooses as best of breed because people have different criteria. One of those criteria will be the location of the building and its site-specific needs.

Abstract interfaces create interoperability. Interoperability lets you choose the systems you want to fit the needs of the site you have. Interoperability will let systems compete based upon performance at the location and upon meeting the needs of the house.

Abstraction creates interoperability. Interoperability enables markets. Markets drive diversity. Diversity offers choice. Choice drives competition. Competition drives innovation.

Each of these choices should be made at the system level, not at the component level. Because systems offer service. And service is where competition works best.

Artificially Intelligent Grid?

I’ve been thinking for a while that most artificial intelligence attempts get one big thing wrong. They design single-purpose systems that do one thing well but have no other aspects to their behavior. Neuroscientists often do the same thing, carefully noodling out the mechanisms and structures that support a single purpose.

Neuroscience is often advanced by war, and by people who suffer some sort of brain injury. An injury may take out an entire cognitive function, but the personality and consciousness, while deformed, remain. So when is it that a system exhibits intelligent behavior? It may not be when enough low-level programs are written, but rather when enough service-oriented systems are amalgamated.

Recently I have been reading some background economic theory from Lynne Kiesling ( www.knowledgeproblem.com ). In her introduction, she leads the reader through the definition of standard markets as complex adaptive systems. Complex adaptive systems have large numbers of diverse agents that interact. Each agent reacts to the actions of the other agents and to changes in the environment. Agents are autonomous, using distributed control and decentralized decision making. Eventually, the dominant interaction becomes the agents interacting with a system environment that was itself created by the agents’ own independent decisions.

The market pattern results in emergent self-organization, in which a large-scale pattern emerges out of the smaller decisions and interactions. The emergent pattern is not imposed top-down, but rather arises from decentralized agents interacting within the bounds of distributed control (or self-control, if you will).
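
A toy simulation makes the mechanism concrete. Every numeric value and behavioral rule in the sketch below is invented for illustration: each agent follows only its own private rule, yet aggregate demand and price drift toward a pattern that no one imposed from the top.

```python
import random

random.seed(1)


class ConsumerAgent:
    """An autonomous agent with a private, local decision rule."""

    def __init__(self) -> None:
        # Private willingness to pay, unknown to other agents and to the grid.
        self.reservation_price = random.uniform(0.05, 0.25)  # $/kWh

    def demand_kw(self, price: float) -> float:
        # Decentralized decision: consume more when power is cheap enough.
        return 2.0 if price <= self.reservation_price else 0.5


agents = [ConsumerAgent() for _ in range(1000)]
price = 0.10          # $/kWh, the shared environment the agents react to
capacity_kw = 1200.0  # illustrative system constraint

for round_number in range(10):
    total_demand = sum(agent.demand_kw(price) for agent in agents)
    # The environment is itself the product of the agents' prior decisions:
    # price rises when their collective demand overloads the system.
    price *= 1.05 if total_demand > capacity_kw else 0.95
    print(f"round {round_number}: demand {total_demand:.0f} kW, "
          f"next price ${price:.3f}/kWh")
```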

Another characteristic of such markets is resilience in the face of change, what the economists call adaptive capacity. This is of course a key element of intelligence.

For an old brain chemistry dude, this description of complex adaptive systems sounds a whole lot more like the proper model for intelligence and consciousness than do many of the reductive neuroscience models, let alone the AI approach. It is clearly aligned with the principles and language of embryology. Any number of gee-whiz articles since the sequencing of the human genome have explained that “it is really not a blueprint, but an organizing principle.” Emergent self-organization is a pretty good description of how the body organizes itself, actually.

We’ve been talking about using building system-based agents as players in emerging energy markets. But now I’m wondering. Are we defining an ecosystem of agents that will be self-organizing, irrespective of the economics? Is it mandatory that we have a multiplicity of agents, to offer us resilience rather than stampedes during a crisis? Should we think of building services and efficient energy use as the tropisms these agents follow?

What if we’ve finally found the path to Artificial Intelligence…

Getting from Registers to Ontologies

Control programming today is like writing device drivers. Internal to a computer, low-level programming is about moving data in and out of internal registers. Control system programming is, for the most part, reading and setting remote points. oBIX 1.0, first and foremost, provides point services for setting, reading, and tracking remote control systems. By defining a web-services-based pattern for accessing the point service, oBIX has made control systems accessible to enterprise systems and to enterprise programmers. Point services are not, however, enterprise-friendly.
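
Reduced to its essentials, the point-service style of integration looks something like the sketch below. The host name, URL paths, and JSON payload shape are hypothetical stand-ins, not the actual oBIX 1.0 encoding; the point is that enterprise code ends up reading and nudging one point at a time, register-style.

```python
import requests

# Hypothetical gateway exposing points over HTTP; not the oBIX 1.0 schema.
BASE = "http://building-gateway.example.com/points"


def read_point(point_name: str) -> float:
    """Fetch the current value of a single remote point."""
    response = requests.get(f"{BASE}/{point_name}")
    response.raise_for_status()
    return float(response.json()["value"])


def write_point(point_name: str, value: float) -> None:
    """Set a single remote point, such as a setpoint or a command."""
    response = requests.put(f"{BASE}/{point_name}", json={"value": value})
    response.raise_for_status()


if __name__ == "__main__":
    # Point-level logic: read a temperature, adjust a setpoint, one point at a time.
    zone_temp = read_point("AHU1/zone_temp")
    if zone_temp > 23.0:
        write_point("AHU1/cooling_setpoint", 22.0)
```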

In almost all creation myths, the first task of man is the naming of things. We call formal rules for naming of things “semantics”. The next task for oBIX is to move to formal semantics of embedded systems. There are three approaches we could follow for semantics: tagging, system, and service.

Tagging is the most traditional naming for control systems. Tags are merely the naming of each point. Tags may appear on the initial schematic diagram of the control system. There may be some sort of internal logic to tagging. CWCRT007 may be the Chilled Water Coil Return Temperature #7. I might just as easily use the tag CWC007RT for the Chilled Water Coil 7 return temperature. The control system integrator assigns tags within control systems. If I am lucky, the integrator working on the third floor will use a naming convention compatible with that used by the man working on the sixth floor. If I am an advanced owner, I might have specified the standard to be used pre-construction. Tag standards such as these do little to help the enterprise work with multiple buildings.
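
The cost of this shows up as soon as software has to interpret the tags. In the sketch below, the two conventions from the example above each need their own parser, and every additional building or integrator tends to add another pattern to the list.

```python
import re

# Each integrator's tag convention needs its own parser. The two patterns
# below match the hypothetical tags from the text: CWCRT007 and CWC007RT.
TAG_CONVENTIONS = [
    re.compile(r"^(?P<system>CWC)(?P<point>RT)(?P<index>\d{3})$"),  # code, point, number
    re.compile(r"^(?P<system>CWC)(?P<index>\d{3})(?P<point>RT)$"),  # code, number, point
]


def parse_tag(tag: str) -> dict:
    for pattern in TAG_CONVENTIONS:
        match = pattern.match(tag)
        if match:
            return match.groupdict()
    raise ValueError(f"No known convention matches tag {tag!r}")


print(parse_tag("CWCRT007"))  # {'system': 'CWC', 'point': 'RT', 'index': '007'}
print(parse_tag("CWC007RT"))  # {'system': 'CWC', 'index': '007', 'point': 'RT'}
```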

System-based semantics name things by the system they are part of. This approach aligns well with the data life cycle defined by NBIMS (National Building Information Model Standard), especially if the contractor uses COBie (Construction Operations Building Information Exchange) to hand over NBIMS information to maintenance and operations. Each system gets the same name it had on the initial design documents. One problem with this approach is that most design documents have significant errors and duplication in the controls portions. These names, while useful to those performing maintenance on the building, often have little to do with how the tenants see the building, and thus may be difficult for enterprise programmers to use.

Service-based semantics name systems for what they do, not what they are made of. Service-based semantics may be mapped to the spaces they support, e.g., "Heating and Cooling for the Big Conference Room". This makes it easy to link business processes with building processes; we can easily imagine inviting the heating and cooling system to the big meeting. This approach may require additional maintenance, as the C-level executive’s office, however critical, may move from one room to another.

Ontologies are the next step above semantics. Ontologies are the classifications that semantics fit into. A computer-based ontology would enable a computer to fit a system into one or several hierarchies of meaning. To illustrate, consider a room in which a cat is playing. I ask the computer, "Are there any animals present?" Using semantics alone, a cat is not an animal, and the answer is "No". Using an ontology, the system considers that a Cat is a type of Pet and a type of Mammal. A Mammal is recognized as a type of Animal. Now the computer can answer correctly: "Yes".
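
A toy version of that reasoning, with a hand-built hierarchy, shows how little machinery the inference requires; the classifications below are invented for the example.

```python
# Map each term to the broader classes it belongs to; a term may sit in
# several hierarchies at once.
ontology = {
    "cat": {"pet", "mammal"},
    "dog": {"pet", "mammal"},
    "mammal": {"animal"},
    "pet": set(),
    "animal": set(),
}


def is_a(term: str, category: str) -> bool:
    """Climb every chain of broader classes looking for the category."""
    if term == category:
        return True
    return any(is_a(parent, category) for parent in ontology.get(term, set()))


print(is_a("cat", "animal"))  # True: cat -> mammal -> animal
print(is_a("cat", "pet"))     # True: the same term fits a second hierarchy
print(is_a("cat", "dog"))     # False
```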

Today, we talk of the interactive web and call it Web 2.0. Web 2.0 is interactive and responsive in ways that the initial internet was not. Small point services add functionality such as the type-ahead and spell-check functions in Gmail. Current discussions of the future of the internet imagine systems being able to negotiate with multiple remote web sites to increase function and responsiveness. These functions may include discovering new remote services on the fly to respond to user or system requests. They will require that systems be able to recognize and understand the services provided by remote systems. The basis of Web 3.0 will be the formal ontological classification of web services.

Service-based semantics provide a better basis for ontologies than do system-based or tag-based semantics. A single system may provide more than one service. Each service may be linked to multiple chains of ontology, just as the cat above is linked to both the "Pet" and the "Mammal" ontological hierarchies. A single service may be linked to both an external standards-based ontology and an internal organization-based ontology.
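
A small sketch of how that might look for building services follows; the classification labels and service names are hypothetical, standing in for an external standard on one side and an organization's own taxonomy on the other.

```python
# One service, two ontological links: an (invented) external standard class
# and an internal, organization-specific class.
services = [
    {
        "name": "Heating and Cooling for Big Conference Room",
        "system": "AHU-3",  # what it is made of
        "external_class": "hvac:SpaceConditioning",
        "internal_class": "facilities:MeetingSpaces/BigConferenceRoom",
    },
    {
        "name": "Pool Loop Climate Service",
        "system": "PoolHeatSink-1",
        "external_class": "hvac:SpaceConditioning",
        "internal_class": "facilities:Residence/WholeHouse",
    },
]


def find_services(classification: str) -> list:
    """Query by either hierarchy; the underlying equipment is irrelevant."""
    return [
        s["name"]
        for s in services
        if classification in (s["external_class"], s["internal_class"])
    ]


print(find_services("hvac:SpaceConditioning"))
print(find_services("facilities:MeetingSpaces/BigConferenceRoom"))
```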

At first hearing, all of this sounds like a bit of a stretch, but in the near term it will become the basis of what we expect from all system integrations. Interested readers may wish to check out Ontolog ( http://ontolog.cim3.net/ ), a Web 2.0 home for those exploring how to find meaning (ontology) in engineered systems.

Corporate Transparency and Energy Boondoggles

There is an abstract interface already in place that most of us understand. It communicates scarcity and abundance. It allocates resources over time. It relays the comparative worth of alternate solutions. We call the interface “the economy” and we call the abstractions “money”.

When we interfere with the prices in the economy, we are deliberately miscommunicating value and scarcity. When we deliberately falsify communications, people will make the wrong decisions. Those decisions will be bad for the economy, bad for resource allocation, and bad, in the long term, for the person making those decisions.

I wrote last week (other-peoples-money-and-poor-decisions.html) of a proposed program in Wilmington, Delaware to offer reduced electrical rates for five years to new businesses setting up shop in the area. Each of these businesses is wasting time by putting off developing new business processes that use less energy. These businesses have identified themselves as being among those most in need of new processes by relocating to take advantage of the deal.

As a country, we have dedicated a lot of effort toward transparency in corporate governance. Many of the rules are bound in the Sarbanes-Oxley (SarBox) legislation and the resulting regulation. The aim of these rules is to require corporate officers to take responsibility for full and honest reporting of business practices and potential liabilities. The spirit, if not the letter, of the law should require these businesses to report their strategic failures and the anticipated costs of ignoring true energy pricing.

Most likely, the corporate officers will award themselves bonuses for short term profit goals, and be long gone before the costs of their decisions become visible. The activists who might be expected to protest will be blinded by the fulfillment of their progressive fantasies of job creation. And all of us will pay in continued high energy use, subsidized by the muddy thinkers of Wilmington, Delaware. Bad information leads to bad actions. Fully transparent pricing for electricity is the basis upon which good decisions can be made.

Technology has given us an historic opportunity for transparent energy markets. The Energy Bill of 2005 has spread the enabling mechanism of time-of-day metering across the country. We can now apply the most commonly accepted abstract interface to the time-of-day allocation of energy resources. We should get past politics as usual to take advantage of it.