Agent-oriented software engineering

In case you don’t know, there is something called agent-oriented software engineering (AOSE). Many people are doing research in this field, books and papers are being written and conferences run on this topic.

AOSE revolves around the idea of software systems as collections of agents, which are often described as autonomous, pro-active, situated entities. Each agent has its own goals (its private agenda, so to speak), and it is up to the agent to accept or reject a request to perform an action, depending on whether or not it believes the action will help it achieve those private goals. Usually, agents must interact with each other in order to achieve a system goal, and systems of this kind are often called multi-agent systems (MAS). Internally, agents are often described as having beliefs, desires and intentions (the so-called BDI model of agent internals).
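To make the autonomy idea concrete, here is a minimal sketch (not any real agent framework; the class, goal tags and heuristic are all my own illustrative inventions) of an agent that accepts or rejects a requested action depending on whether it believes the action serves its private goals:

```python
# Toy sketch of agent autonomy: the agent, not the caller, decides
# whether a requested action gets performed.

class Agent:
    def __init__(self, name, goals):
        self.name = name
        self.goals = set(goals)   # the agent's private agenda

    def believes_helps(self, action):
        # Hypothetical heuristic: the action is worthwhile if it is
        # tagged as serving at least one of the agent's own goals.
        return bool(self.goals & set(action.get("serves", [])))

    def request(self, action):
        # Autonomy: accept or reject based on the agent's own agenda.
        if self.believes_helps(action):
            return f"{self.name}: accepted {action['name']}"
        return f"{self.name}: rejected {action['name']}"

seller = Agent("seller", goals={"maximise_profit"})
print(seller.request({"name": "lower_price", "serves": ["win_customer"]}))
print(seller.request({"name": "raise_price", "serves": ["maximise_profit"]}))
```

Note that nothing here is "intelligent"; the decision is an ordinary conditional, which is part of the point made later in this post.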

Some authors have stated that AOSE is the next step beyond objects, and that object-oriented software engineering will soon be superseded by agents. Other, more cautious authors say that agents and objects can co-exist and that most future systems will be composed of agents plus objects. Some believe that agents are just “big, intelligent” objects, while others think that agents and objects are not related at all. Some authors have built and published agent-oriented software development methodologies, i.e. methodologies intended for the development of MAS. They claim that existing methodologies (most of them self-described as object-oriented) are not valid for MAS development and that a new generation of methodologies is therefore necessary. Some of the authors involved in AOSE come from a software engineering background, often in the object-oriented field. Others come from an artificial intelligence background, supposedly the cradle of agents.

I have become heavily involved with AOSE in recent months. Since my background is in object-oriented software development methodologies, I approach the agent world from a very different perspective to the one an artificial intelligence person would take. This is good and bad. At the same time, I am skeptical by nature, and tend to question nearly everything. This can lead to time wasted in revising things that are OK, but sometimes it leads to finding new solutions to old problems that would otherwise have passed unnoticed. When I got involved with AOSE some months ago, I started questioning everything. Rocking the boat, so to speak.

I will explain here two of the biggest issues I found with AOSE. The first one is related to the concept of agent-orientation and the term “agent-oriented”. Quite evidently, this term was coined after “object-oriented”, and supposedly tries to convey that a paradigm exists in which the basic conceptual building blocks are agent-related. In the object-oriented paradigm, the basic building blocks are Object, Class, Attribute, Operation, etc. Similarly, in the new agent-oriented paradigm, the basic building blocks would probably be Agent, Role, Belief, Desire, Intention, Message, etc. The parallelism seems to work. However, when you try to implement an agent-oriented system using the agent-oriented building blocks, you inevitably find that, at some point, the paradigm is not enough. Additional concepts are necessary to proceed beyond a quite high-level description of a system. This is, by the way, confirmed by both industry and academe. Invariably, agent proponents, at some point, revert to using concepts from the object-oriented paradigm to specify MAS. Good examples are AOSE methodologies such as Gaia, Tropos or Prometheus, or agent-oriented “programming languages” such as JACK or JADE. It seems to me that agent-orientation is not a paradigm, then. It looks more like a highly specialised collection of concepts that is extremely good for some modelling problems, but unusable for others.

I will give you an example. Some years ago, people writing compilers would think of a compiler as inevitably built around a pipelined architecture, in which a stream of data is sequentially transformed by a number of different processes (parsing, AST generation, AST decoration, optimisation, code generation, etc.). This community developed a highly specialised language that was optimised to deal with the modelling and specification of compilers. This language included concepts such as Stream, AST, Process, Instruction, etc. When they modelled a new compiler system, they would use this language and express their models using these concepts. Of course, nobody would use these concepts to express a video-shop management system or an operating system. The language was clearly “compiler-oriented”. At some point in the modelling activity, specifications would need to be translated to some lower-level paradigm so that they could be implemented. For example, compiler models (expressed in terms of streams, ASTs and processes) would be translated into software models expressed in traditional terms of classes, methods and variables (assuming an object-oriented, C++-ish approach, which was common). It seemed like the object-oriented paradigm was underpinning everything but, at high levels of abstraction, using a compiler-oriented conceptual set was highly convenient.
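The “compiler-oriented” view described above can be sketched in a few lines. The stage names and the toy token format below are illustrative only, not taken from any real compiler; the point is simply that data flows through a sequence of processes, and that when you write the sketch down, you end up expressing it in ordinary lower-level constructs (functions, lists, strings):

```python
# Toy "pipelined compiler": a stream of data sequentially transformed
# by a number of processes, as in the compiler-oriented conceptual set.

def parse(source):          # source text -> token stream
    return source.split()

def build_ast(tokens):      # token stream -> crude "AST"
    return ("program", tokens)

def optimise(ast):          # drop a hypothetical no-op token
    kind, tokens = ast
    return (kind, [t for t in tokens if t != "nop"])

def generate(ast):          # "code generation": re-emit text
    return " ".join(ast[1])

PIPELINE = [parse, build_ast, optimise, generate]

def compile_source(source):
    data = source
    for stage in PIPELINE:  # each process transforms the stream in turn
        data = stage(data)
    return data

print(compile_source("load a nop add b"))   # -> "load a add b"
```

Notice how the high-level vocabulary (Stream, Process, AST) evaporates into plain functions and lists as soon as the model is made executable, which is exactly the translation step described above.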

I can establish many similarities between this compiler-oriented conceptual set and the emerging agent-oriented “paradigm”. My feeling is that AOSE is not a paradigm. Agents are not the next step beyond objects. AOSE is simply a highly specialised conceptual set that is optimised for describing systems of certain kinds at high levels of abstraction. AOSE is not a paradigm because it is severely constrained with regard to (a) application domain scope and (b) abstraction level. An operating system cannot be successfully developed by using agents. A MAS cannot be successfully implemented, tested and run by using agents. You need something else. A true paradigm would not be constrained in scope or abstraction level.

My second issue is with the extensive use that the AOSE literature makes of what it calls “mentalistic” terms. It talks about beliefs, intentions, desires, survival, competition, reasoning. It even uses terms such as “social” and “intelligent”. I can only think of three possibilities here.

  1. AOSE literature uses these terms literally. In this case, I don’t believe it. “Intelligent” agents are no more intelligent than any object or database record. Their “social” capability does not incorporate true social traits such as those described by sociologists and anthropologists.
  2. These terms are used with new semantics. For example, “intelligence” means the capability of processing symbolic information at high speed. This is not what “intelligence” really means, but one could argue that words are just arbitrary symbols and anybody is free to redefine them as they see fit as long as they are consistent. If this is the case, I have to say that (a) I haven’t seen any new definition of these terms, let alone a consistent one, and (b) it is really confusing to use well-known words with a new meaning!
  3. These terms are used metaphorically. When an AOSE paper says “agent A has desires”, it really means that agent A stores some information that plays, to a certain extent, a role similar to what the related information would play in a human being. If this is the case, I need to say that metaphor is fine for poetry or fiction, but not for scientific work.

So, I’ve no idea. I try to avoid these terms when I write scientifically about AOSE, but they pervade the AOSE literature so much that they are becoming a problem. Since no consistent definitions exist, different authors understand and use them differently, and people (like me) coming from other backgrounds just freak out when they see them.

I think there is a lot to be done in AOSE. As a niche conceptual set, it seems to have plenty of applications and, although most works to date look more like solutions in search of a problem, I think that this is an exciting field to work in. I will keep you posted.


4 Responses to “Agent-oriented software engineering”

  1. dcoop-msft 26 January 2005 at 15:37

    1st – It’s horrible that I should have to have a Passport to respond to MSN Places blogs. That defeats some of the transparency of blogs and while it does probably keep blog spam to a minimum, there are better ways (even ones that MSFT is a proponent of). What value is there in a stranger creating a one-time Passport just to post on a blog (other than keeping spam down)? Bah!
    2nd – In IE these blogs are nearly UN-FSKING-READABLE! In IE of all things!!! I’ve cranked my View->Text Size up to Largest and the text is still tiny. In IE! Horrible UX. If there’s a PM watching these blogs, please note this: the experience completely sucks.

    Sorry about that. I’m sorely disappointed, though. I tried to start my own blog here, but won’t because it’s so hostile to blogs.

    Finally for you and not the MSN folks:
    1. Please give more detail. Spend several blog posts on it. Break it down into digestible pieces.
    2. What product do you work on?

  2. dcoop-msft 26 January 2005 at 15:46

    3. There SEEMS to be a limit to the number of characters I can use in my comments. WTF?
    4. Spaces team: Are you hiring? I think I can give you some useful test feedback that you don’t seem to have gotten yet.

    Sorry about that again, Cesar.

    Back to you . . .
    3. Why and how do you feel that agents are a step forward from objects? (Didn’t Lisp-ish stuff have agent ideas before there was the concept of objects? I see them as orthogonal.)
    4. Define "intelligent agent". Not "define ‘intelligence’", but "define ‘intelligent agent’" – they’re different (although I used some ugly sophistry on EricGu’s blog recently to make a point).
    5. Avoid using the word "intelligence". Maybe even scrap "intelligent agent". They’re too loaded. You can make something useful and make up your own labels for it.

  3. dcoop-msft 26 January 2005 at 15:53

    Drat – again I’m cut off!
    6. Anthropomorphism is endemic but almost always bad once the tortured analogies start. An analogy is tortured every 34.2 seconds on earth. Stop the pain!
    Seriously – all of the anthropomorphic talk seems to lead to more bad analogies than useful dialogue. I don’t care whether it’s about what my code wants to do or what greedy genes want – the usefulness in my eyes is solely in pop-science lit. We’re in software and should be able to think beyond that. If someone doesn’t understand that "agent A desires" doesn’t mean something else you have one of two problems:
    a) Someone’s trying to be a weird sophist and use your words but not your terms. (I find this common in popular AI detractors.)
    b) That person hasn’t bought your paradigm. It’s a horribly tortured analogy. Stop it!
    Either way, I’d go with b) Stop it!
    7. I completely apologize for my spelling mistakes. I didn’t spell-check any of this and see a couple of glaring errors. If I weren’t s

  4. dcoop-msft 26 January 2005 at 16:01

    -o pissed off at MSN Spaces, maybe I’d have spent the time to seem more intelligible. I apologize again. (grumble)

    Anyway – please post a longer series of your thoughts on agents, broken into tasty bite-sized pieces. I’d like to see it. (As would I like to see more autonomous/semi-autonomous agents in our products.)

    They post things in REVERSE ORDER! Again – WTF??? And again – MSN – you hiring testers? I have some bugs for you.

    (P.S. I’m DCoop here at MSFT if you want to drop me a line. (Yes, also for the MSN Spaces folks – hire me or tell me how much you don’t like my criticism, whatever. Better yet – fix MSN Spaces!))
