by Erzsébet Csuhaj-Varjú and György Vaszil
Computer and Automation Research Institute, Hungarian Academy of Sciences
1111 Budapest, Kende u. 13-17
E-mails: {csuhaj,vaszil}@sztaki.hu
Copyright by Erzsébet Csuhaj-Varjú and György Vaszil.
Coherent PC grammar systems are introduced. The components of these systems may continue the derivation after the appearance and before the replacement of a query symbol if the queried string contains nonterminals that cannot be rewritten by the querying grammar. The querying grammar may wait for a string which does not contain ``unrewritable'' symbols.
It is shown that, with these modifications, the language generated by returning systems with regular components is still recognizable by O(log n)-space bounded non-deterministic Turing machines, but these systems can generate a language which is complete for this class, the language of ``ordered reachability''. Also: there are context-free languages which cannot be generated by these PC grammar systems unless O(log n)-space bounded Turing machines are as powerful as O(log n)-space bounded auxiliary pushdown automata.
Centralized PC grammar systems with linear components are studied. (The definition of a communication step is reformulated; this formulation is different from the usual one and defines the same mode as communication mode (b) in [28], but the two formulations are equivalent in the centralized or linear cases.)
The language (a^n b^n c^n)^2 can be generated by returning or non-returning centralized systems with regular components. The language a^n b^m c^n d^m can be generated by returning or non-returning centralized systems with algebraic components.
The class of languages generated by non-returning centralized systems with regular or linear components is closed under union, intersection with rational sets, and with regular components it is also closed under product.
Accepting and analyzing versions of PC grammar systems with context-free productions are discussed and characterizations of recursively enumerable languages are given.
In accepting systems rules of the form a → A are used, where a is a word and A is a nonterminal or a query symbol. These rules are applied as in the generating case, and the language consists of all terminal words which can derive the axiom. It is proven that all types of these accepting grammar systems describe the family of recursively enumerable languages, even if λ-rules are forbidden (a toy example of this rule format is given after this entry).
Analyzing PC grammar systems perform the derivations of their generative counterparts backwards, which requires a modification of the generating derivation concept, leading to strong returning PC grammar systems; these also generate the family of recursively enumerable languages.
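A toy single-component illustration of the accepting rule format described above (the concrete rules are chosen only for illustration and assume, as the reversed reading of context-free rules suggests, that the left-hand side word may contain nonterminals):

    \[
      b \to A, \qquad aA \to S \quad (\text{axiom } S):
      \qquad ab \;\Rightarrow\; aA \;\Rightarrow\; S,
    \]

so the terminal word ab derives the axiom and is therefore accepted; these rules are the reversals of the generating rules S → aA, A → b.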
Preliminary version of [5].
Full version of [3].
Languages generated by returning or non-returning PC grammar systems with linear components can be accepted by O(log n)-space bounded non-deterministic Turing machines.
Consequences: The characteristic function of these languages can be computed by Boolean circuits of polynomial size and O(log^2 n) depth. The class of these languages is strictly contained in the class of context-sensitive languages. Each language generated by these systems is O(log n)-space reducible to a context-free language. There are context-free languages which cannot be generated by these PC grammar systems unless O(log n)-space bounded Turing machines are as powerful as O(log n)-space bounded auxiliary pushdown automata (which is unlikely). There are languages generated by context-free PC grammar systems which cannot be generated by these PC grammar systems unless O(log n)-space bounded Turing machines are as powerful as O(log n)-space bounded auxiliary pushdown automata (which is unlikely).
Languages generated by returning PC grammar systems with regular components can be accepted by O(log n)-space bounded non-deterministic Turing machines. (The question was left open in [44].)
Consequences of this result: Each language generated by these systems is O(log n)-space reducible to a context-free language. There are context-free languages which cannot be generated by these PC grammar systems unless O(log n)-space bounded Turing machines are as powerful as O(log n)-space bounded auxiliary pushdown automata (which is unlikely). The characteristic function of these languages can be computed by Boolean circuits of polynomial size and O(log^2 n) depth.
The possibility of generating languages modelling replication, multiple agreements, and crossed agreements (L1, L2, L3) is investigated, and results stronger than before (in [84] and [31]) are obtained.
L1 can be generated by a non-returning centralized system with two regular components. L3 can be generated by a returning or non-returning centralized system with four context-free components.
The paper, while introducing the contents of this special issue, briefly surveys the different fields of the theory of grammar systems and their history, and informally reviews different models (CD grammar systems, colonies, eco-grammar systems, networks of language processors) with their different motivations.
The formal language theoretic model of networks of language processors, a common framework for several parallel and distributed symbol processing architectures introduced informally in [23], is presented. In a network of language processors the symbolic process develops in a virtually complete graph having a language processor in each of its nodes. The processors can handle data (process strings according to their rule sets) or communicate data and/or programs (send and accept strings). In the paper the concept of language processors is illustrated with two special cases (both of them being computationally complete and universal), parallel communicating grammar systems communicating by command (see [20]) and test tube distributed systems based on splicing (see [19]). These networks communicate only data and have a finite (non-changing) number of nodes.
A formal language theoretic framework is presented aimed to capture the aspects of networked computing and to deal with questions raised by the appearance of unconventional computational paradigms, such as molecular computing or evolutionary computing. The general concept of networks of language processors is described (see also [10]) and illustrated with examples.
The paper also addresses issues where the emphasis is put not only on identifying languages, but also on the size of string populations present at the nodes (in this case multisets of strings are used where each element can have a different number of copies), see also [23].
By the abstract: Networks of language processors (NLP systems) is a collective term which has been introduced as a formal language theoretic framework for describing symbolic processing in highly (massively) parallel and distributed architectures. Roughly speaking, an NLP system consists of several language determining devices (language processors) which are located at nodes of a virtual graph (a network) and which rewrite strings and communicate them through the network. The paper briefly discusses the model and introduces a particular variant which can be considered as a formal model for collaborating agents which communicate with each other through a network and use recommendations for filtering information.
The main features of the concept of networks of language processors are described (see [10], [11]) and illustrated with two particular cases taken from the area of parallel communicating grammar systems.
Networks of language processors are discussed from the point of view of the following main characteristics: generative power, the role of synchronization and communication, the size of the system and the presentation of the components of the system. Recent important results are summarized according to these aspects.
The paper summarizes some important results on parallel communicating grammar systems where the components have bounded size parameters (bounded resources). It is demonstrated that these systems are not only powerful but also economical tools for language generation; even the class of recursively enumerable languages can be obtained by simple parallel communicating grammar systems with bounded resources.
The paper is the edited and extended version of [14].
Standard Watson-Crick D0L systems (SWD0L systems) are variants of D0L systems with controlled derivations, motivated by the paradigm of Watson-Crick complementarity in the operational sense. In these systems each letter has a complementary letter and, depending on a special condition (a trigger), a derivation step is applied either to the string or to its complementary word. A network of standard Watson-Crick D0L systems (an NSWD0L system) is a finite set of SWD0L systems over a common DNA-like alphabet which act on their own strings in parallel and after each derivation step communicate some of the obtained words to each other. The condition for communication is determined by the trigger for turning to the complementary word. The paper proves that NSWD0L systems form a class of computationally complete devices, that is, any recursively enumerable language can be identified by a network of standard Watson-Crick D0L systems.
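A minimal sketch of the complementarity mechanism described above; the alphabet, the D0L rules, and in particular the trigger used here are illustrative assumptions, not the definitions of the paper:

    # Minimal sketch of one standard Watson-Crick D0L-style step (illustration only).
    # The complement pairing, the D0L table, and the trigger below are assumptions
    # chosen for the example, not the paper's concrete definitions.
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}
    RULES = {"A": "AC", "C": "T", "G": "GA", "T": "G"}   # a deterministic 0L table

    def complement(word):
        return "".join(COMPLEMENT[ch] for ch in word)

    def trigger(word):
        # Hypothetical trigger: switch to the complementary word when
        # purines (A, G) outnumber pyrimidines (C, T).
        return sum(ch in "AG" for ch in word) > sum(ch in "CT" for ch in word)

    def step(word):
        if trigger(word):
            word = complement(word)          # rewrite the complementary word instead
        return "".join(RULES[ch] for ch in word)  # parallel D0L rewriting

    w = "ACG"
    for _ in range(3):
        w = step(w)
        print(w)    # ACG -> GGAT -> TTGAC -> GGGAACT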
It is shown that there exists a natural number k such that every recursively enumerable language can be generated by a context-free returning parallel communicating grammar system where the number of nonterminals is less than or equal to this constant. Moreover, the component grammars of the system have a limited number of productions. The result demonstrates that context-free returning parallel communicating grammar systems are economical tools for language generation.
The basic monograph in the area of grammar systems.
A symbol processing mechanism is defined where the components (test tubes) are working as splicing schemes in the sense of T. Head, and are communicating by redistributing the contents of the tubes (similarly to the ``separate'' operation of Lipton-Adleman). In the paper it is shown that systems with finite initial content of the tubes and finite sets of splicing rules associated to each component are computationally complete, they characterize the class of recursively enumerable languages. The existence of universal test tube distributed systems is also obtained on this basis, proving the possibility of designing universal programmable computers with such a structure.
Variants of parallel communicating grammar systems are proposed in order to simulate data flow in WAVE-like architectures of parallel processors. While in PC grammar systems communication is usually done by request, here it is done by command, the target processors being selected according to the pattern of the string to be communicated. Several variants (motivated by the WAVE paradigm, the Boltzmann machine, and the Connection Machine) are considered; one of them is formally defined and investigated in detail (returning systems with maximal derivation in rewriting steps, communication without splitting the word being sent, and concatenating the received messages). These systems are able to generate non-regular one-letter languages with three regular components. If we consider context-free components, the class of context-sensitive languages is generated with any number of components if this number is more than one. (With erasing rules, the characterization of recursively enumerable languages is obtained.)
This paper continues a series of papers dealing with the effect of some strategies of communication among the components of an automata system on its computational power, strategies borrowed from grammar system theory. Automata systems are considered which consist of several pushdown automata which work in parallel and communicate with each other by means of their stacks. The main topic investigated is the computational power of these mechanisms. It is shown that non-centralized automata systems with three and six components working under the non-returning and returning strategy, respectively, are able to recognize all recursively enumerable languages. Homomorphical characterizations of recursively enumerable languages are obtained for the centralized variants. It is also proved that these variants are at least as powerful as one-way multihead pushdown automata. Some open problems and further directions of research are finally discussed.
The authors offer a new framework for describing mainly locally connected processor arrays where the cell processors are defined by rewriting systems (grammars). The notion of the CNN template is defined by the local communication rules and the rewriting process following the communication. A striking similarity of the dynamics of the Cellular Networks of Language Processors to the analog CNN dynamics can be demonstrated. As a future example, the dynamic activity pattern of the Internet could also be modelled in this way.
The concept of networks of language processors is introduced as a framework for modelling parallel and distributed computations on multisets of strings (string collections with possibly more than one occurrence of the same string), see also [10], [11].
Properties of networks of parallel language processors (0L or T0L systems at the nodes) are investigated. It is shown that these devices characterize the languages generated by context conditional ET0L systems (the conditions are the same as the filters used in the network).
Issues concerning the size of string populations at the nodes are also addressed. It is shown that the growth of the number of strings present at a node of a system having deterministic 0L components and random context filters can be described by the growth function of a D0L system. The paper also gives examples of the rich possibilities of functioning modes becoming available if the choice of tables used in the T0L systems at the nodes depends on the work of other agents in the network.
As opposed to networks of language processors communicating by command (see [11]), in this paper the language processors communicate only when necessary, by request (parallel communicating systems of language processors).
A series of results is discussed according to the main characteristics of these systems: generative power, mode of synchronization, mode of communication, form of the components, and the size of the system. In the last part of the paper further models are proposed, motivated by social filtering systems or collaborative filtering systems and by nature motivated computing architectures, such as Watson-Crick 0L systems.
The paper proves that all recursively enumerable languages can be generated by context-free returning parallel communicating grammar systems by showing how the parallel communicating grammars can simulate two-counter machines, a restricted but computationally complete class of Turing machines. It is also shown that a bounded number of components (eleven) is enough to reach this power.
The paper introduces the notion of a test tube system with objects, a distributed parallel computing device operating with multisets of symbols, motivated by characteristics of biochemical processes. It is demonstrated that these constructs are suitable tools for computing, any recursively enumerable set can be identified by a TTO system. Some open questions arising from the unconventional nature of this computational tool are raised.
The generative power of context-free returning parallel communicating grammar systems is studied using different synchronization mechanisms and communication protocols. The equivalence of systems using three different communication protocols and of systems with or without rule-synchronization is demonstrated. A normal form for context-free returning systems is also given: it is shown that all languages generated by these systems can also be generated with rules having at most two nonterminals/terminals or one query symbol on the right-hand side.
The authors examine the generative power of parallel communicating grammar systems with context-free or E0L components communicating incomplete information, that is, only subwords, prefixes, or suffixes of their sentential forms. They prove that these systems in most cases, even with E0L components, generate all recursively enumerable languages.
The concept of splicing (an operation on strings motivated by the cut and recombination of DNA strands) and of PC grammar systems are combined. The grammars work as usual, communication is governed by splicing rules.
Three context-free components can generate all recursively enumerable languages. The language class generated by three regular components strictly includes the class of regular languages, two regular components can generate non-linear languages, three regular components can generate non-context-free languages.
The class of languages generated by returning and centralized regular PC grammar systems is strictly included in the class generated by splicing grammar systems with regular components (in an individual component).
Summary of notions and results in grammar systems theory.
Non-returning PC grammar systems with n right-linear or context-free components are simulated by returning PC grammar systems with 4n^2 - 3n + 1 right-linear or context-free components.
It is shown that the class of linear languages is strictly included in the class of languages generated by returning PC grammar systems with right-linear components. The language classes generated with right-linear components by centralized returning and by centralized non-returning systems are incomparable. It is also shown that centralized returning systems with right-linear rules are strictly more powerful than systems of the same type with regular rules in the restricted sense. The power of being right-linear lies in ``chain rules'', as shown in the last theorem, which states that each system with right-linear rules (centralized or non-centralized) has an equivalent with regular rules and chain rules.
The power of patterns is compared with that of PC grammar systems.
Multi-pattern languages with context-free variables are strictly included in the class of languages generated by returning or non-returning centralized PC grammar systems with context-free components (obvious). If we consider regular components, the generated language class is incomparable with multi-pattern languages with context-free or with regular variables. In the non-centralized case the language class generated with returning or non-returning PC grammar systems with regular components strictly includes the languages defined by multi-patterns with regular variables.
Infinite multi-patterns (regular or context-free) are also discussed. The languages defined by regular multi-patterns with regular variables are strictly included in the class of languages generated by returning or non-returning PC grammar systems with regular components. The same holds for context-free patterns with context-free variables and PC grammar systems with context-free components, in the non-returning case it even holds for centralized systems.
``Somewhat forgotten'' models of cooperating automata from the seventies and early eighties are presented, such as n-parallel grammars (R. D. Rosebrugh, D. Wood) and synchronously working communicating finite automata (A. V. Aho, J. D. Ullman, A. D. Wyner, M. Yannakakis).
See [37].
A new variant of parallel communicating grammar systems, called PC grammar systems with terminal transmission, PCGSTT for short, motivated by questions concerning grammatical inference of these systems, is introduced. It is shown that the right-linear variants of PCGSTT are closed under gsm mappings (in particular, under intersection with regular sets and under homomorphisms) and under union; a slight variant is, in addition, closed under concatenation and star. It is proved that the generative capacity of these systems lies between that of n-parallel grammars introduced by Wood and that of matrix languages of index n. As a consequence of these results, the fixed membership problem is in NL. The author proves that some particular variants (terminal distinguishable right-linear ones) are identifiable in the limit if certain data communication information is supplied in addition.
The journal version of [37].
In [91] it was shown that returning PC grammar systems with three context-sensitive components generate all recursively enumerable languages. In this paper it is shown that the number of components in this theorem cannot be reduced: with two components the generated languages are still context-sensitive. It is also shown that in the non-returning case two components are enough to generate all recursively enumerable languages. In the systems considered in the last theorem of the paper the rules S_i → λ are allowed for all i, 1 ≤ i ≤ n (S_i is the start symbol of the i-th component grammar). These systems generate all recursively enumerable languages with two components in all cases (returning or non-returning, centralized or non-centralized).
The generative capacity of PC grammar systems with three components is investigated. All possible communication structures (64 possibilities) are considered for regular components and returning systems, some for context-free and also for non-returning systems.
A sufficient condition for a returning parallel communicating grammar system, with regular components, over a one-letter alphabet, to generate a regular language is given. It is proved that this condition is an extension of D. Pardubska's condition in [72].
An ET0L system is associated to each returning right-linear PC grammar system, and the first theorem states that if this ET0L system generates a finite language, then the corresponding PC grammar system (which is over a one-letter alphabet) generates a regular language. This new condition also holds for a certain system which generates a regular language but does not satisfy the restrictions given in [72]. Then the condition from [72] is shown to be implied by the finiteness of this language.
The generative capacity of splicing grammar systems is investigated in this paper. It is proved that any linear language can be generated by a splicing grammar system with two regular components; any context-free language can be generated by a splicing grammar system with three regular components; any recursively enumerable language can be generated by a splicing grammar system with four right linear components.
Returning PC grammar systems with regular components are characterized by sequential complexity classes depending on their communication structure and their communication complexity. Communication complexity hierarchies are obtained, together with pumping lemmas and infinite hierarchies on the number of components.
The lower bounds on communication complexity measures of language generation of parallel communicating grammar systems are investigated. The first result shows that there exists a language that can be generated by a dag-PCGS (a PCGS with communication structure realizable by a directed acyclic graph) consisting of 3 grammars, but by no PCGS with a tree communication structure. The second result shows that dag-PCGSs have either constant or linear communication complexity of language generation.
PC grammar systems communicating by command are studied. Multiple communication (a component may receive a concatenation of sentential forms from other components) and single communication (a component may receive only one sentential form) are distinguished.
The hierarchies for regular and linear components collapse in the single communication case, and the hierarchy for context-sensitive components collapses in both cases. This latter result, together with [20], also shows that the hierarchy for context-free components collapses in both cases.
In this paper it is shown that grammar systems with regular components and communication by command generate exactly all context-sensitive languages, which strengthens a known result. By allowing the splitting of the sentential form to be communicated, the authors obtain a similar regular characterization of the family of recursively enumerable languages. The results proved are optimal with respect to the number of components and a complete diagram is presented which relates the generative power of all investigated types of systems with communication by command to the families of regular, context-free, context-sensitive, and recursively enumerable languages.
PC grammar systems with communication by command with or without splitting are studied. (A regular language is associated to each component; when a string belonging to one of these languages is derived in a component, it is sent to the component corresponding to that language. Splitting: if a sentential form is a concatenation of strings belonging to languages of other components, each subword can be sent to the component it belongs to; a small illustrative sketch of this splitting condition follows this entry.)
With splitting three regular components are enough to generate any recursively enumerable language, without splitting three regular components generate all context-sensitive languages.
In the second part of the paper 2-level relabeling systems are introduced to present the model of the first part without reference to grammar systems.
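A small sketch of the splitting condition described in the entry above (the regular languages attached to the components are hypothetical examples; the exact eligibility rules of the protocol follow the paper, and only the decomposition condition is illustrated here):

    # Sketch of the splitting condition for communication by command (illustration only;
    # the regular languages attached to the components are hypothetical examples).
    import re

    COMPONENT_LANGUAGES = {1: re.compile(r"a+b"), 2: re.compile(r"c+"), 3: re.compile(r"(ab)+")}

    def splittings(word, start=0):
        """Yield all ways to cut `word` into consecutive subwords, each belonging to
        the regular language of some component (the component that would receive it)."""
        if start == len(word):
            yield []
            return
        for end in range(start + 1, len(word) + 1):
            piece = word[start:end]
            for comp, pattern in COMPONENT_LANGUAGES.items():
                if pattern.fullmatch(piece):
                    for rest in splittings(word, end):
                        yield [(comp, piece)] + rest

    # Prints the first splitting found: [(1, 'aab'), (2, 'c'), (2, 'c'), (1, 'ab')]
    print(next(splittings("aabccab"), None))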
The paper demonstrates analogies between the theory of autolexical syntax and grammar systems, and suggests the suitability of grammar systems for approaching linguistic matters.
It is shown in this paper that the classes of languages generated by centralized and non-centralized returning PC grammar systems with regular components form a strict hierarchy: for all n, the language class generated by n+1 components strictly includes the class generated by n components. A pumping lemma is also presented for languages generated by centralized systems with n components.
PC grammar systems with features inspired by Hoare's model of communicating sequential processes are considered. PC grammar systems with set queries (instead of query symbols, a set of symbols appears in the sentential forms, one of them has to be chosen and the corresponding sentential form replaces the set-symbol in the string) have the same power as ordinary systems.
This also holds for PC grammar systems with queries by nonterminals (components have different nonterminal sets; when a nonterminal appears which cannot be rewritten but is the start symbol of another component, the sentential form of that component replaces the nonterminal).
PC grammar systems with flags (there are flagged symbols denoting ``send me to component i'' or ``replace me with the string of component j''; when a flagged symbol appears it remains unchanged until the ``pair'' flag appears in another sentential form, in which case one of them replaces the other one) in the centralized returning case have the same power as unsynchronized centralized returning systems. Also, there are languages which can be generated by a centralized non-returning system but cannot be generated by a centralized returning system.
PC grammar systems with filters (a regular language is associated to each query symbol; the symbols can be replaced only if the replacing string is an element of the language) can generate all recursively enumerable languages. In the centralized case they are not stronger than usual centralized systems.
The paper surveys the motivations and the background of the concept of grammar systems and discusses the contribution of this field to the development of formal language theory.
The authors give a short summary of the main achievements of the field and sketch some possible new directions.
The authors briefly introduce grammar systems, discuss the motivations of this concept, overview the recent trends and preview some possibilities in the area.
It is shown that non-returning parallel communicating grammar systems with context-free components generate all recursively enumerable languages.
The construction is based on a theorem of Matijasevic (1970), which states that for each recursively enumerable set A ⊆ N there exist polynomials P_1, …, P_m in n+1 variables such that y ∈ A if and only if there are x_1, …, x_n ∈ N with P_i(y, x_1, …, x_n) = 0 for all 1 ≤ i ≤ m.
The constructed system generates arbitrary words together with their unary encodings and arbitrary values x_1, …, x_n, also in unary form, then checks whether or not these n+1 numbers are solutions of the polynomials P_1, …, P_m, in which case the word in question becomes part of the generated language.
To achieve this task, the system needs component grammars which can perform addition, multiplication, and multiplication by a constant on unary coded numbers, which is possible to do in a non-returning context-free system.
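As an illustration of the arithmetical check behind this construction only: the polynomials below are hypothetical placeholders (Matijasevic's theorem only guarantees that suitable polynomials exist for every recursively enumerable set), and the bounded brute-force search stands in for the unbounded nondeterministic guessing that the grammar system performs in unary.

    # Sketch of the arithmetical check behind the construction (illustration only).
    # The polynomials are hypothetical placeholders; the bounded search replaces the
    # unbounded nondeterministic guess of the witnesses x_1, ..., x_n.
    from itertools import product

    def in_set(y, polynomials, n_vars, bound):
        """Return True if some witness tuple (x_1, ..., x_n) with entries up to
        `bound` makes every polynomial vanish on (y, x_1, ..., x_n)."""
        for xs in product(range(bound + 1), repeat=n_vars):
            if all(P(y, *xs) == 0 for P in polynomials):
                return True
        return False

    # Example: a (trivial) Diophantine description of the even numbers, y = 2*x_1.
    evens = [lambda y, x1: y - 2 * x1]
    print([y for y in range(10) if in_set(y, evens, n_vars=1, bound=10)])  # [0, 2, 4, 6, 8]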
It is shown that languages generated by matrix grammars without appearance checking, with or without λ-rules, can be generated by returning PC grammar systems with context-free components, with or without λ-rules.
It is also shown that languages generated by centralized returning context-free PC grammar systems without λ-rules and with other restrictions described in the paper (but these restrictions still allow the generation of non-context-free languages) can also be generated by matrix grammars without λ-rules and with appearance checking.
In the next part of the paper PC grammar systems with leftmost derivations are introduced, and it is shown that the class of languages generated by simple matrix grammars with λ-rules is included in the class of languages generated by returning context-free PC grammar systems with leftmost derivations and λ-rules, an inclusion which is strict if λ-rules are not allowed.
In the last part of the paper PC grammar systems with rule-synchronization or symbol-synchronization together with appearance checking are considered. It is shown that with context-free components these systems generate all languages generated by matrix grammars with appearance checking, and all recursively enumerable languages if λ-rules can be used.
It is shown that unsynchronized, returning and centralized systems with context-free components and without multiple queries generate only context-free languages (a system is unsynchronized if components do not necessarily use a rewriting rule in each rewriting step; a query is multiple if a sentential form contains more than one occurrence of the same query symbol). In the non-centralized case these systems can generate non-context-free languages with three components.
In the next part of the paper returning and non-returning systems are compared: it is shown that centralized non-returning systems can be simulated by returning (but non-centralized) systems in the case of linear or context-free components, with or without multiple queries.
``The paper considers the simplest class of parallel communicating grammar systems (PCGSs), namely those with regular components, which as yet seems not to be sufficiently investigated. It is proved that in the centralized returning case (deriving in the usual mode) and in the non-centralized non-returning case with terminal derivation the systems are not `too powerful'; more precisely, they can be simulated by finite index matrix grammars.'' (Terminal derivation: only terminal strings can be communicated.)
``A variant of parallel communicating grammar systems is considered, which can generate in the centralized case the recursively enumerable languages by context-free, even right-linear production rules. Moreover, the presented model can be naturally modified in order to obtain a parallel communicating grammar system with an infinite number of components.''
The generative capacity of parallel communicating grammar systems is considered in the setting where the component grammars have distinct terminal and nonterminal sets. In the regular case, this results in strictly more powerful systems in comparison to the classical ones. In the context-free case, a characterization of recursively enumerable languages is obtained when λ-rules are allowed in non-centralized returning systems, deriving in the synchronized mode. Unsynchronized context-free systems with separated alphabets have the same power as the corresponding usual systems. The paper continues the research started in [66].
Szilard languages associated to PC grammar systems are considered. Several relations among the generated language classes and the associated Szilard languages or among the Szilard languages themselves are proved for regular, linear and context-free components.
Hybrid PC grammar systems are introduced: some components are returning, others are non-returning. In the non-centralized case, hybrid systems have the same generative capacity as the usual ones with right-linear, linear, and context-free components (the regular case remains open). For regular components hybrid centralized systems are strictly stronger than returning or non-returning centralized systems; for context-free systems this holds in the non-returning case (and is conjectured also for the returning one). The language classes generated by centralized hybrid systems with regular or right-linear components are incomparable with the class of linear languages, but in the non-centralized case the linear languages are included in the other class.
``A variant of parallel communicating grammar systems is considered, where the grammar components have distinct terminal and non-terminal sets. The generative capacity is studied for the case of systems with regular components, when deriving in the synchronized, unsynchronized or terminal mode. Just as in the case of cooperating distributed grammar systems, introducing such features increases the power of these systems. This is not true for unsynchronized context-free systems: systems with distinct terminals and non-terminals have the same power as usual systems. The case of synchronized context-free systems remains open.''
As far as parallel communicating grammar systems are concerned, the dissertation is based on the results of [59], [60], [61], [63], [62], [64], [66], [65], [68], [70], and [69].
Variants of PC grammar systems are proposed with motivations from artificial intelligence.
Hybrid PC grammar systems are systems where in each time unit components are allowed to take a given (but different) number of derivation steps, any number of steps, or to work as long as they can (as in the different derivation modes defined for CD grammar systems). Hybridity is also proposed in another sense: some components communicate in returning mode, others in non-returning mode (see [65]).
In PC grammar systems with graph control, a communication graph is associated to the systems, components are allowed to communicate only with those other components to which they are connected.
In PC grammar systems with renaming (a notion introduced by M. D. Jiménez-López in 1996) a query symbol not only gives the index of the queried component, but also an element of a given set of weak codes to encode the transmitted sentential form before it replaces the query symbol.
In PC grammar systems with hypothesis language only strings belonging to a given regular language can be communicated.
Teams in PC grammar systems and PC grammar systems with separated alphabets (see [63], [66]) are also proposed. The paper ends with a few remarks on PC grammar systems and natural language processing.
The coverability tree of non-returning regular and linear PC grammar systems is considered, but only for terminal derivations (some branches are cut off).
Szilard languages of the transitions of these PC grammar systems are introduced. It is shown that the Szilard and non-Szilard languages are regular for linear or regular components and a pumping lemma for these languages is presented.
Decidable: whether a given transition is used (in terminal derivations), whether a given component has asked another one or has been asked by another one in a communication, whether the number of communications is bounded, whether terminal derivations are conflict-free, emptiness, and finiteness.
Returning PC grammar systems with right-linear rules are investigated. The number of communications (not of symbols, as in [94]) is considered as a communication complexity measure. For each k there is a language which cannot be generated with o(n^{1/k}) communications but can be generated with O(n^{1/k}) communications (n is the length of the generated string).
Right-linear returning PC grammar systems with different communication structures are investigated. If the communication graph has at most one cycle and, if there is one, this cycle contains the master component, then the generated language over a one-letter alphabet is regular. With more cycles, or with one cycle not containing the master, even these systems generate non-regular languages over a one-letter alphabet.
Unsynchronized PC grammar systems are investigated in this paper. It is shown that unsynchronized centralized systems with regular or linear components generate only regular or linear languages; with context-free rules two components are enough to generate a non-context-free language (still in the centralized case), but these systems are not able to generate languages which cannot be obtained by ordinary (synchronized) centralized PC grammar systems with context-free components.
Centralized PC grammar systems with context-free components are investigated in this paper. Centralized systems with linear components generate only semi-linear languages, with context-free rules two components are enough to generate a non-semi-linear language. If we consider linear components, the language class generated by centralized systems is strictly included in the class generated by non-centralized systems.
Motivations and definitions of (conditional grammars,) cooperating distributed grammar systems, and PC grammar systems are presented with examples.
``The paper is devoted to the study of parallel communicating grammar systems, previously introduced by the same author. As a matter of fact, this study is much more related to formal language theory than to concurrency theory. The results which are given are decision properties and closure properties.''
The complexity of generating a language by a context-free grammar or by a parallel communicating grammar system (PCGS), is compared in the sense of Gruska's measures Var, Prod, Symb. Then a specific measure for PCGS, Com, is defined dealing with the number of communication symbols appearing in a derivation. The results are the expected ones: the PCGS are definitely more efficient than context-free grammars (the assertion will receive a precise meaning in Section 2), the parameter Com introduces an infinite hierarchy of languages, is incomparable with Var, Prod, Symb, and cannot be algorithmically computed.
Parallel communicating L systems (with several types of 0L components) are introduced and their generative power is investigated. The class of languages generated by three 0L, T0L, E0L, or ET0L components (non-centralized, returning systems) strictly includes the class of languages generated by 0L, T0L, E0L, or ET0L systems.
In the case of T0L, ET0L, DT0L, or EDT0L components the theorem above holds already for two components. The same also holds for 0L, E0L, and D0L components.
The class of languages generated by PCnT0L systems is ``not too large'': there is an E0L (and EDT0L) language which cannot be generated by these systems. In the last part of the paper it is shown that all languages generated by returning PC systems with ED0L components can also be generated by EDT0L systems; in this case the parallel work of the system can be simulated by (deterministic) tables.
Unsynchronized systems and systems with extra synchronization added are investigated. In [73] the strictness of the inclusions was shown with counterexamples; here a pumping condition is given which all languages generated by unsynchronized, centralized, non-returning PC grammar systems with context-free components must satisfy.
Rule and symbol synchronization is introduced. It is shown that systems synchronized in any of these two ways generate the same classes of languages, and centralized systems with two regular components can generate all regular simple matrix languages.
Terminal synchronization is also introduced (only terminal strings can be communicated). In the centralized (returning or non-returning) cases with regular or linear components, every terminally synchronized language can be generated by a normal system of the same type with two components. Two linear components (centralized, returning) can generate non-context-free languages, but centralized and returning systems cannot generate languages which cannot be generated by linear simple matrix grammars with three components. This does not hold in the context-free case: two centralized (returning or non-returning) components can generate a non-simple matrix language.
Synchronization by markers is also introduced. (If certain symbols are present in the sentential form of the master grammar, then the strings of the other components have to satisfy a random context condition; a separate condition is given for each such symbol in the master.) The language classes generated by regular and centralized systems synchronized this way properly include the classes generated by the same systems without marker synchronization.
The history and an overview of results about cooperating distributed and PC grammar systems are given (until 1995) together with examples, open problems and bibliographical information.
The paper presents notions and results not included in the monograph [18]. It starts by introducing the basic definitions and the basic results of grammar system theory; then a survey of results is given on the power of hybrid cooperating distributed (CD) grammar systems, on teams in CD grammar systems, and on parallel communicating (PC) grammar systems with context-free and context-sensitive components. Other directions of research are also briefly mentioned, as well as a series of open problems.
The generative capacity of the splicing grammar systems is investigated. The notion was introduced in [30] as a variant of parallel communicating grammar systems: instead of communication, one considers splicing operations between the sentential forms currently generated by the components of the system. It is proved that such systems with only two (λ-free) context-free components can generate all recursively enumerable languages, and that systems with two regular components can generate non-context-free languages. Then it is shown that each recursively enumerable language can be represented as the intersection of a regular language and the language generated by a splicing grammar system with three regular components. The first result answers a problem left open in the paper cited above; the last two improve results in that paper.
First several results are recalled concerning the generative power of parallel communicating (PC) grammar systems, including characterizations of recursively enumerable (RE) languages starting from PC grammar systems and their languages. Then it is proved that the simple matrix languages can be generated by PC grammar systems, and finally a new class of PC grammar systems is introduced: when a component has to communicate, it may transmit any non-empty prefix of its current sentential form. Each RE language is the morphic image of the intersection with a regular language of a language generated by such a system. A number of open problems are pointed out in this context.
In a parallel communicating grammar system, several grammars work together, synchronously, on their own sentential forms, and communicate on request. Usually no restriction is imposed on the communicated strings. The paper considers here two types of restrictions as models of the negotiation process in multi-agent systems: (1) conditions formulated on the transmitted strings, and (2) conditions formulated on the resulting string in the receiving components. These conditions are expressed in terms of regular languages to which the mentioned strings should belong. The communicated string is allowed to be a part (any one, a prefix, a maximal or a minimal one, etc.) of the string of the communicating component. These variants are investigated from the point of view of the generative capacity of the corresponding parallel communicating grammar systems.
Unsynchronized PC grammar systems are investigated also in the non-centralized case. Returning systems with two regular components generate non-regular languages (but this class of languages is strictly included in the class of context-free languages), with two linear components non-context-free languages. Centralized and non-returning systems with two regular components generate non-semi-linear languages, with two context-free components one-letter non-regular languages.
Parallel communicating grammar systems are introduced, and the centralized case with regular components is investigated.
Systems with context-sensitive components are investigated. It is shown that any recursively enumerable language can be generated by a non-centralized returning PC grammar system with three context-sensitive components (``two is not enough'' will be shown in [39]).
By the abstract: We shall show, that simple communication can be substituted by parallel rewriting and nondeterminism without time penalty. This is no longer true for more complex communication. In particular, we shall show that time preserving simulation of regular PCGS by g-systems is possible whereas time preserving simulation of context-free PCGS is impossible.
By the abstract: We consider the automata systems consisting of several pushdown automata working in parallel and communicating the contents of their stacks by request using filtering technique. We prove that these systems with two components accept all recursively enumerable languages both in the centralized and in the non-centralized modes and in both returning and non-returning strategies. We also show that centralized non-returning parallel communicating pushdown automata systems without filters recognize all recursively enumerable languages.
Decision problems are investigated for non-returning context-free systems and context-sensitive (returning, non-returning) systems.
For non-returning context-free systems the notion of a coverability tree is defined. The vertices of this tree correspond to vectors, and these vectors correspond to configurations of the system by storing the number of occurrences of each nonterminal in each sentential form of the system in a given configuration. This tree is made finite (and effectively constructible, which is shown) by cutting off infinite paths and substituting them with leaves. An infinite path can be cut off if the numbers of nonterminals in the sentential forms are not decreasing and no communications are involved. If the number of occurrences of a symbol increases unboundedly along the path, it is denoted by ω in the vector of the replacing leaf (a small illustrative sketch of these configuration vectors follows this entry).
Decidable: whether a given transition is enabled (and several other properties based on this one), and whether a given system has only a bounded number of communications.
Undecidable for context-sensitive systems (the proofs are based on the undecidability of the c-reachability problem, which is also proved here): whether a given transition is enabled, whether a circular configuration can be reached, whether a system works like a centralized one, whether a system is conflict-free (no two components request the same sentential form in any communication), and whether a given system has only a bounded number of communications (these were the problems shown to be decidable for the non-returning context-free case).
Since any recursively enumerable language can be generated by returning systems with three context-sensitive components (or by returning or non-returning systems with two components if λ-rules are used), the questions undecidable for recursively enumerable languages are also undecidable for context-sensitive PC grammar systems of these types (membership, equivalence, finiteness, inclusion, emptiness).
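A sketch of the configuration vectors underlying the coverability tree described in this entry (the nonterminals and the configuration are hypothetical, and the ω-generalisation shown is one reading of the cutting-off rule, in the usual Karp-Miller style):

    # Sketch of the configuration vectors used in the coverability tree (illustration only;
    # the nonterminals and the configuration below are hypothetical).
    from collections import Counter

    NONTERMINALS = ["S", "A", "B"]
    OMEGA = float("inf")   # stands for the symbol written as omega above

    def configuration_vector(sentential_forms):
        """One entry per (component, nonterminal): the number of occurrences of
        that nonterminal in that component's sentential form."""
        vector = []
        for form in sentential_forms:
            counts = Counter(ch for ch in form if ch in NONTERMINALS)
            vector.extend(counts[X] for X in NONTERMINALS)
        return vector

    def generalize(ancestor, current):
        """Replace strictly growing entries by omega when the current vector
        dominates an ancestor's vector, mirroring the cutting-off rule above."""
        if all(c >= a for a, c in zip(ancestor, current)):
            return [OMEGA if c > a else c for a, c in zip(ancestor, current)]
        return current

    v1 = configuration_vector(["aSA", "Bb"])      # [1, 1, 0, 0, 0, 1]
    v2 = configuration_vector(["aSAA", "BbB"])    # dominates v1
    print(generalize(v1, v2))                     # [1, inf, 0, 0, 0, inf]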
The reachability graph is introduced for regular PC grammar systems; the nodes are so-called ``nonterminal cuts'', and an arc leads from one node to another if the system can reach the second nonterminal cut from the first one. Based on this graph, all decidability questions considered in [96] are decidable for regular PC grammar systems.
In the next part of the paper processes and partial words of PC grammar systems are introduced and studied. Finally the paper gives vectorial characterizations of languages of PC grammar systems.
The first part of the paper repeats the decidability results of [96] for non-returning context-free PC grammar systems; then bounded PC grammar systems are introduced: time bounded (all communications happen in the first k steps) or bounded (at most k communications happen) systems. Time bounded PC grammar systems have the same power as their component grammar types. For bounded PC grammar systems the centralized case is investigated with regular components. With the aid of a pumping lemma it is shown that the language classes generated by these systems form a strict (infinite) hierarchy based on the number of communications allowed. The question whether or not we get a hierarchy based on the number of components if we fix the number of communications remains open, except for the case when the number of communications is one; in that case all such systems generate the same class of languages. The languages generated by centralized k-bounded systems (with regular components) are strictly included in the class of languages generated by (k+1)-linear grammars.
Two new derivation modes are introduced for parallel communicating grammar systems. One of them is called competitive, the other popular; both eliminate the hierarchy among the component grammars. The generative power of parallel communicating grammar systems working in these new modes is investigated, with different types of grammars and extended Lindenmayer systems as components.
A slightly modified communication protocol called immediate communication is introduced for parallel communicating grammar systems, and the generative power of these systems is shown to be equal to that of so-called homogeneous systems, systems with queries of a special form. To obtain this result, a generalization of returning systems, called systems with returning languages, is also introduced and studied.
Non-returning PC grammar systems with n context-free components are simulated in [32] by returning PC grammar systems with 4n^2 - 3n + 1 components. In this paper the number of components of the simulating system is reduced by using a different simulating construction. The number of simulating components can be further decreased if the queries appearing in some of the components satisfy a simple condition, which all queries in regular and linear grammars naturally do. This number can be as low as 2n, while in general it is still (n+1)^2.
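For concreteness, the bounds quoted in this entry can be tabulated directly for small n (a straightforward computation of the formulas above, nothing more):

    # Tabulating the component bounds quoted in this entry for small n:
    # the earlier bound 4n^2 - 3n + 1, the general bound (n+1)^2 of this paper,
    # and the bound 2n available under the extra condition on queries.
    for n in range(2, 7):
        print(n, 4 * n * n - 3 * n + 1, (n + 1) ** 2, 2 * n)
    # n=2: 11   9   4
    # n=3: 28  16   6
    # n=4: 53  25   8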
In this paper parallel communicating systems of extended Lindenmayer systems working in three different communication modes are studied. It is pointed out that previous normal form results hold in all three cases, and it is proved that, similarly to parallel communicating grammar systems, the generative power of parallel communicating Lindenmayer systems is the same in all three communication modes.
The paper investigates parallel communicating systems of extended Lindenmayer systems. It is shown that parallel communicating E0L systems have the same power as parallel communicating ET0L systems, the use of tables can be simulated by additional E0L components and the number of the new components does not depend on the number of tables to be simulated. Finally, it is proved that each parallel communicating E0L and ET0L system has an equivalent system with rules of a very simple form.
The paper continues the investigations concerning the generative power of parallel communicating grammar systems working in the competitive and popular derivation modes, introduced in [99], with Chomsky grammars and extended Lindenmayer systems as components. It is shown that returning and non-returning parallel communicating systems with right-linear or linear grammars as components have the same generative power in these modes as they have in the usual one, which also holds for returning systems with context-free grammars and extended 0L systems as components.
The dissertation contains results of [26], [28], [99], [100], [101], [102], [103], [104].
First a review is presented on some of the previous research on the generative capacity of parallel communicating Lindenmayer systems, then new results are stated about the power of parallel communicating 0L systems and also a way to increase this power is proposed by allowing query symbols to appear in the axioms of the components.
The generative power of parallel communicating systems with extended Lindenmayer systems as components is investigated. It is proved that, similarly to the context-free case, non-returning systems can be simulated by returning systems; then the power of non-returning systems is demonstrated by showing that they are able to generate the twin-shuffle language over arbitrary alphabets.
In this paper it is proved that the family of languages generated by the non-centralized parallel communicating grammar systems with two components and right-linear rules is included in the family of context-free languages. Thus, a generalization of Theorem 3 of [89], which states the inclusion for the centralized case, is obtained.
By the abstract: The paper investigates parallel communicating systems where the components of the systems are given by k-limited and uniformly k-limited 0L systems (or by such ET0L systems, more generally). Parallel communication increases the generative power of the underlying systems. The language families generated by such systems are compared with each other as well as with other known language families.