In case you don’t know, there is something called agent-oriented software engineering (AOSE). Many people are doing research in this field, books and papers are being written, and conferences are run on the topic.
AOSE revolves around the idea of software systems as collections of agents, which are often described as autonomous, pro-active, situated entities. Each agent has its own goals (its private agenda, so to speak), and it is up to the agent to accept or reject a request to perform an action, depending on whether it believes doing so will help it achieve those goals. Usually, agents must interact with each other in order to achieve a system goal, and systems of this kind are often called multi-agent systems (MAS). Internally, agents are often described as having beliefs, desires and intentions (this is the BDI model of agent internals).
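To make the idea concrete, here is a minimal sketch of the BDI-flavoured autonomy just described: an agent holds beliefs, desires and intentions, and accepts a request only if it serves one of its own goals. All the names (`Agent`, `Request`, the goal strings) are invented here for illustration; they are not taken from any real agent framework.

```python
class Request:
    """A request from another agent to pursue some goal."""
    def __init__(self, goal):
        self.goal = goal

class Agent:
    def __init__(self, name, beliefs, desires):
        self.name = name
        self.beliefs = set(beliefs)    # what the agent currently holds true
        self.desires = set(desires)    # goals on its private agenda
        self.intentions = []           # goals it has committed to pursue

    def receive(self, request):
        # Autonomy: the agent accepts only requests that it believes
        # further one of its own goals; everything else is rejected.
        if request.goal in self.desires:
            self.intentions.append(request.goal)
            return "accept"
        return "reject"

seller = Agent("seller", beliefs={"stock_available"},
               desires={"maximise_profit"})
print(seller.receive(Request("maximise_profit")))  # accept
print(seller.receive(Request("give_discount")))    # reject
```

Note that even this tiny sketch is written with classes, attributes and methods, a point I will come back to below.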
Some authors have stated that AOSE is the next step beyond objects, and that object-oriented software engineering will soon be superseded by agents. Other, more cautious authors say that agents and objects can co-exist and that most systems in the future will be composed of agents plus objects. Some believe that agents are just “big intelligent” objects, while others think that agents and objects are not related at all. Some authors have built and published agent-oriented software development methodologies, i.e. methodologies intended for the development of MAS. They claim that existing methodologies (most of them self-described as object-oriented) are not valid for MAS development, and that a new generation of methodologies is therefore necessary. Some of the authors involved in AOSE come from a software engineering background, often in the object-oriented field; others come from an artificial intelligence background, supposedly the cradle of agents.
I have become heavily involved with AOSE in recent months. Since my background is in object-oriented software development methodologies, I approach the agent world from a very different perspective than an artificial intelligence person would. This is both good and bad. At the same time, I am sceptical by nature and tend to question nearly everything. This can lead to time wasted revising things that are fine, but sometimes it leads to finding new solutions to old problems that would otherwise have passed unnoticed. When I got involved with AOSE some months ago, I started questioning everything. Rocking the boat, so to speak.
I will explain here two of the biggest issues I have found with AOSE. The first is related to the concept of agent-orientation and the term “agent-oriented”. Quite evidently, this term has been coined after “object-oriented”, and supposedly tries to convey that a paradigm exists in which the basic conceptual building blocks are agent-related. In the object-oriented paradigm, the basic building blocks are Object, Class, Attribute, Operation, etc. Similarly, in the new agent-oriented paradigm, the basic building blocks would presumably be Agent, Role, Belief, Desire, Intention, Message, etc. The parallelism seems to work. However, when you try to implement an agent-oriented system using the agent-oriented building blocks, you inevitably find that, at some point, the paradigm is not enough. Additional concepts are necessary to proceed beyond a fairly high-level description of a system. This is, by the way, confirmed by both industry and academia. Invariably, agent proponents, at some point, revert to using concepts from the object-oriented paradigm to specify MAS. Good examples are AOSE methodologies such as Gaia, Tropos or Prometheus, or agent-oriented “programming languages” such as JACK or JADE. It seems to me, then, that agent-orientation is not a paradigm. It looks more like a highly specialised collection of concepts that is extremely good for some modelling problems but unsuitable for others.
I will give you an example. Some years ago, people writing compilers would think of a compiler as inevitably built around a pipelined architecture, in which a stream of data is sequentially transformed by a number of different processes (parsing, AST generation, AST decoration, optimisation, code generation, etc.). This community developed a highly specialised language that was optimised to deal with the modelling and specification of compilers. This language included concepts such as Stream, AST, Process, Instruction, etc. When they modelled a new compiler system, they would use this language and express their models using these concepts. Of course, nobody would use these concepts to express a video-shop management system or an operating system. The language was clearly “compiler-oriented”. At some point in the modelling activity, specifications would need to be translated to some lower-level paradigm so they could be implemented. For example, compiler models (expressed in terms of streams, ASTs and processes) would be translated into software models expressed in traditional terms of classes, methods and variables (assuming an object-oriented, C++-ish approach, which was common). It seemed like the object-oriented paradigm was underpinning everything but, at high levels of abstraction, using a compiler-oriented conceptual set was highly convenient.
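The pipeline idea above can be sketched in a few lines: a stream flows through a sequence of processes, each transforming the output of the previous one. The stage names and the trivial arithmetic-expression example are invented here purely for illustration, and the “compiler-oriented” concepts (stream, AST, process) bottom out, as argued, in plain functions and data structures.

```python
def tokenise(source):
    # Parsing, crudely: split the source stream into tokens.
    return source.split()

def to_ast(tokens):
    # AST generation for a single binary expression "a + b".
    left, op, right = tokens
    return (op, int(left), int(right))

def optimise(ast):
    # Optimisation: constant-fold when both operands are literals.
    op, left, right = ast
    if op == "+":
        return ("lit", left + right)
    return ast

def codegen(ast):
    # Code generation for a toy stack machine.
    kind, value = ast
    return [f"PUSH {value}"]

# The pipeline itself: each process transforms the previous stream.
stages = [tokenise, to_ast, optimise, codegen]
program = "2 + 3"
for stage in stages:
    program = stage(program)
print(program)  # ['PUSH 5']
```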
I can establish many similarities between this compiler-oriented conceptual set and the emerging agent-oriented “paradigm”. My feeling is that AOSE is not a paradigm. Agents are not the next step beyond objects. AOSE is simply a highly specialised conceptual set that is optimised for describing systems of certain kinds at high levels of abstraction. AOSE is not a paradigm because it is severely constrained with regard to (a) application domain scope and (b) abstraction level. An operating system cannot be successfully developed by using agents. A MAS cannot be successfully implemented, tested and run by using agents alone. You need something else. A true paradigm would not be constrained in scope or abstraction level.
My second issue is with the extensive use that AOSE literature makes of what they call “mentalistic” terms. They talk about beliefs, intentions, desires, survival, competition, reasoning. They even use terms such as “social” and “intelligent”. I can only think of three possibilities here.
- AOSE literature uses these terms literally. In this case, I don’t believe it. “Intelligent” agents are no more intelligent than any object or database record. Their “social” capability does not incorporate true social traits such as those described by sociologists and anthropologists.
- These terms are used with new semantics. For example, “intelligence” means the capability of processing symbolic information at high speed. This is not what “intelligence” really means, but one could argue that words are just arbitrary symbols and anybody is free to redefine them as they see fit as long as they are consistent. If this is the case, I have to say that (a) I haven’t seen any new definition of these terms, let alone a consistent one, and (b) it is really confusing to use well-known words with a new meaning!
- These terms are used metaphorically. When an AOSE paper says “agent A has desires”, it really means that agent A stores some information that plays, to a certain extent, a role similar to the one the related information would play in a human being. If this is the case, I have to say that metaphor is fine for poetry or fiction, but not for scientific work.
So, I’ve no idea. I try to avoid these terms when I write scientifically about AOSE, but they pervade the AOSE literature so much that they are becoming a problem. Since no consistent definitions exist, different authors understand and use them differently, and people (like me) coming from other backgrounds just freak out when they see them.
I think there is a lot to be done in AOSE. As a niche conceptual set, it seems to have plenty of applications and, although most works to date look more like solutions in search of a problem, I think that this is an exciting field to work in. I will keep you posted.