I’ve been asked to elaborate on my recent post on agent-oriented software engineering. With pleasure.
I am quite new to agent-oriented software engineering (AOSE): I have been seriously involved with it only for the last nine months, if you don’t count my juvenile experiences with strange attractors, chaos theory and cousins of the Mandelbrot set. My background is extremely mixed: I have degrees in fundamental biology, applied electronics and archaeology; I have started two businesses; and I have worked for universities, private companies (both startups and mega-corporations) and government bodies. I am not trying to show off, but to make clear that my approach to AOSE, as to everything else, is extremely eclectic and “lateral”. I tend to question everything, which often results in great learning experiences and sometimes in embarrassing foot-in-my-mouth situations. As my friend Ghassan says, I like rocking the boat. Nicely.
I currently work for the University of Technology, Sydney as a research fellow. Of course, this blog reflects my views and only my views, and I am not representing UTS when I give them. I have been hired to take a major part in the development of an agent-oriented methodological framework based on the method engineering paradigm. Basically, this means that we must apply method engineering principles to build a methodological framework that supports (or rather, is optimised for) the development of agent-oriented systems.
You may or may not be familiar with the method engineering paradigm. Basically, it says that no method is appropriate for all situations, so rather than having “the” method, what you should have is a repository of method fragments from which a method engineer can pick, mix and match to construct customised methods as necessary. Using this approach, a method engineer can construct an organisation-wide method for your company by selecting the best method fragments from the repository and combining them in the appropriate way. Later, that organisation-wide method can be refined into application domain-specific methods, or even project-specific methods, if necessary. But the key idea is: build your methods by assembling method fragments from a pre-existing repository.
A method fragment repository looks like a database that stores, well, method fragments. A method fragment is a self-contained specification of either a job that must be done (such as a task or a technique), a product that can be built (a document, model, piece of software…) or an organisational entity that can be involved in doing so (a team, a role…). Somebody is supposed to design this database and, most importantly, populate it for you with state-of-the-art method fragments drawn from industry best practices. This “somebody” is, in this case, yours truly.
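To make the idea concrete, here is a minimal sketch in Python of the three fragment kinds and the pick-and-assemble step. All the names here are hypothetical illustrations, not our actual tooling; the real repository is a proper database, not an in-memory dictionary.

```python
from dataclasses import dataclass, field

# The three kinds of method fragment described above: work to be done,
# products to be built, and the organisational entities involved in producing them.
@dataclass
class Fragment:
    name: str
    kind: str          # "work_unit" | "product" | "producer"
    description: str = ""

@dataclass
class Repository:
    fragments: dict = field(default_factory=dict)

    def add(self, fragment: Fragment) -> None:
        self.fragments[fragment.name] = fragment

    def pick(self, *names: str) -> list:
        """The method engineer picks fragments by name to assemble a method."""
        return [self.fragments[n] for n in names]

# Somebody (yours truly) populates the repository with fragments...
repo = Repository()
repo.add(Fragment("Requirements elicitation", "work_unit"))
repo.add(Fragment("Use-case model", "product"))
repo.add(Fragment("Analyst", "producer"))

# ...and a method engineer assembles a customised (here, trivially small)
# method by picking and combining fragments from it.
method = repo.pick("Requirements elicitation", "Use-case model", "Analyst")
```

The point of the sketch is only the shape of the paradigm: the repository is agnostic about what the assembled methods are for, which is exactly the issue discussed below.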
Since we have been working on method engineering for a while, we already have some material ready: experience, a bibliography, industry contacts, and some software tools developed in house. I have been applying all this from an object-oriented perspective for the last few years. The task at hand now is to apply it all from an agent-oriented perspective.
When confronted with this proposal, my first question was: “but do I need to do anything special to make method engineering applicable to agents?” I would tend to say no. Let me explain. Method engineering is about constructing methods. What those methods say, or what they are for, is not really important. It is like a film school: you learn how to make films. You learn the techniques, meet producers and directors, learn some theory… But the contents of your films are up to you. From this perspective, it is clear that it does not even make sense to propose applying method engineering to agents. Method engineering helps you build and customise methods; whether the methods you create are intended to develop agent-oriented software, object-oriented systems, relational databases or railway models, we don’t care. Method engineering sits one abstraction level above.
But then you could argue that the method fragments in the repository do depend on the kind of thing the methods can produce. For example, a method to create railway models will probably need, at some point, a specification of the process to follow in order to create the track layout on the board. If nobody puts this method fragment in, we cannot generate a method capable of addressing the development of railway models. Similarly, I can argue that, in order to create an agent-oriented software system, the method fragment repository will need many method fragments that are tightly connected with the fact that we are aiming to produce agent-oriented software. If this reasoning holds, then what I need to do is simply populate the method fragment repository with a lot of fragments specially designed for agent-oriented software development.
How can I “create” these method fragments? Well, very few works on method engineering, as far as I know, deal with the problem of generating method fragments from a theoretical and methodological perspective; usually, they assume that method fragments already exist. But in real life, having a good repository full of high-quality fragments is key to producing a good method. Oh well… What we do is extract fragments from existing “monolithic” methods. That is, you find an existing method, read and think about it until you understand it (more or less), and then try to isolate useful chunks. With some experience, this can become relatively easy. Some of my colleagues have written a series of papers, each looking at a different agent-oriented methodology (such as Gaia, Tropos, MaSE or CAMLE) and extracting useful method fragments from it.
In any case, what I need to do is understand what building an agent-oriented system means. Some people in my team are agent experts, so they can help with that. However, my feeling after some months working in this direction is that most people working on agent-oriented methodologies come from artificial intelligence or related backgrounds, with very few coming from engineering areas. One manifestation of this is that the existing methodologies are defined in a very informal way, sometimes containing significant ambiguities. Also, most key terms (including “agent”, “message”, “environment”, etc.) are not defined, so each author assumes an interpretation that very often is not shared by others. I don’t want to sound as if I were just bashing AI or agents; this is a serious and well-considered conclusion, shared by some of my colleagues, and our response to it is constructive: we have defined the key terms and are working on adding formality to AOSE. While doing this, however, I keep a skeptical and self-critical attitude, which regularly leads me to new questions and doubts. I will post many of these here in bite-sized chunks.