Machine Acting and Contract Law – The Disruptive Factor of Artificial Intelligence for the Freedom Concept of Private Law

Technological developments of the last two decades, such as the rise of the internet, have had a strongly disruptive effect on society and the economy. However, owing to the flexible concepts of the civil law codifications, no comparable disruption has so far occurred within private law. In particular, the legal consequences of the internet were integrated into private law without major categorial or structural changes. The same applies to most current uses of artificial intelligence (AI). With more advanced AI systems, however, it may no longer be possible to apply the traditional terms of private law to the use of AI without abandoning the constitutional background of private law. This article discusses the impact of the use of a future, advanced, independently acting AI on the concept of private autonomy in contract law. Furthermore, it gives an overview of the new legislative approach of the European Union towards a human-centric use of AI.


Introduction
Nowadays the term 'disruption' is used frequently, not only within the economic sciences but also within legal science. With the rise of the internet in the last decade of the 20th century, the speed of communication increased significantly. One may already consider this a certain 'disruption' within the civil law system. We are now facing another technological revolution with the use of artificial intelligence (AI) in many parts of the economy and society. This phenomenon raises even more urgently the question of whether there is a 'disruption' and how the civil law should react to this development. The following article discusses this question with regard to contract law: 1 It will especially discuss the impact of the 'AI revolution' on the terms of legal intent and behaviour in private law.

Research Questions
a. What is the impact of the integration of AI into the contracting process, considering the constitutional basis of private law and the function of the subjective right as an instrument of individual freedom?
b. Can algorithmic acting be integrated into contemporary contract law using the method of legal analogy, or is it necessary to develop new forms of contractual processes adapted to algorithmic acting?
c. Which legislative activities exist in the European Union regarding AI?
Objectives
a. To underline the necessity of preserving the theoretical foundation of the principle of the subjective right and the contractual will of the legal subject within a constitutional freedom concept of private law.
b. To analyse the opportunities for integrating algorithmic acting into the contracting process depending on the degree of control by human actors. Models for the attribution of algorithmic acting to human actors or to human beneficiaries of the algorithmic acting, as well as models of independent algorithmic acting, are outlined and discussed.
c. To contribute to the discussion on the significance of behaviour and control in other branches of private law, such as competition law and tort law.
d. To contribute to the general discussion on the development of human-centric artificial intelligence.
A. Discussion

1. The Rise of the Internet as a Non-Disruptive Factor in Private Law
When discussing 'disruption' in the legal sciences, we should avoid using the term as a mere catchphrase without further technical meaning. Disruption generally means a process of radical transformation, for example of societies or of markets, due to the emergence of something radically new, a 'game changer'. Disruption breaks up existing structures and may even destroy them. Following this definition, an innovation only amounts to a real disruption of the legal system if its effect forces fundamental legal instruments, terms or rules to be abandoned, to be interpreted in a new way, or to be amended with new legal terms in order to protect the consistency of the law.
Looking back at the rise of the internet, we certainly observe a strong increase in the intensity of communication and an acceleration of communication processes, which, for example, opened the way for new forms of ad hoc company collaborations and of effective collaborative networks such as the so-called 'virtual enterprises'. 2 In retrospect, however, we cannot say that the fundamental principles of private law proved inadequate in the face of the digitalization of communication and the virtualization of social and business activities. The rules of contracting, the law of obligations, and the basics of company law and of tort law remained applicable without requiring much effort to adapt to the new conditions.

Artificial Intelligence as a Disruptive Factor in Contract Law
We are currently facing the next technological revolution with the development of artificial intelligence, which has already reached such an advanced stage that it is used in many areas of business. We see algorithms adapting prices to changing market conditions and processing customer data, calculating customers' willingness to pay in order to discriminate in price between different customers. AI is used in human resources management and in automated purchasing systems.
Contract law in particular can serve as an example of how fundamental the question of integrating the new phenomenon of AI into private law actually is. However, the significance of analysing its impact on contract law is by no means limited to contract law. Proposed solutions to the problem in contract law are directly instructive for all branches of private law.
Contracts are concluded by contractual parties on the basis of a contractual will or intent. One might expect that the fundamental problem of integrating artificial intelligence into contract law lies in the definition of the contractual party and therefore in the interpretation of the term 'legal subject'. Indeed, this is an important problem: an entity which is not a natural person must have the capacity to legally submit declarations of intent. However, at least with the technique of a personification by legal fiction, artificially intelligent entities may be legally acknowledged as legal subjects, 3 then able to submit legal declarations or to be liable in case of damages caused through their activities. 4 A resolution of the European Parliament of 2017 proposed to the member states of the European Union the creation of a specific legal status for robots as 'electronic persons', which turns out to be an application of the so-called 'Fiction Theory'. 5 The integration of automated contract declarations into the grown system of legal declarations based on the idea of private autonomy is more difficult, and here a real legal disruption is clearly coming up as soon as the law accepts artificially intelligent entities or algorithms submitting contractual declarations autonomously. Of course, as a condition for algorithmic contractual declarations, artificially intelligent entities would have to be acknowledged as a new category of legal subjects. As soon as algorithmically triggered declarations are more than just anticipated declarations of intent of natural persons or of organs of legal persons, comparable to mechanical vending machines, it must be considered whether we can interpret such declarations as normal legal statements of intent or whether we have to define a new category of 'electronic' legal declarations, leading to the new category of 'e-contracts'. 6
The fundamental disruptive aspect in this context is the matter of 'will' or 'intent': even if a fictional legal capacity of artificially intelligent entities could be acknowledged, it cannot simply be said that AI has the capacity of a will of its own. Even if it 'decides' something based on experience acquired through a deep learning capacity, and even if it reacts to the situations it is confronted with, there is no 'will'.
The term 'will' contains several aspects. One aspect is the mental act which gives the impulse to aim for certain goals. This aspect may be fulfilled by AI systems, as they may be able to act strategically. However, already here it is questionable whether the impulse for an algorithm's acting is based on 'mental' acts comparable to human thinking. At least in the other necessary aspects of the will, wish and desire, we find elements connected with emotions. It cannot be claimed that AI can 'wish' something. Consequently, there is no 'will' in a psychological sense.
In the legal sense, the contractual will is connected with the concept of private autonomy and of contractual freedom as part of private autonomy. The extensive use of the internet has already changed some framework conditions of private autonomy for the natural person, due to the anonymity of the contractual partners, the speed of the decision process of actors in internet contracting situations, and the use of filtering technologies like search engines. 7 However, those changes can hardly be considered really disruptive to the concept of private autonomy or contractual freedom in private law.
Private autonomy is a fundamental principle of private law. Contractual freedom gives the legal subject the freedom to choose the subject matter of the contract, the other party, the consideration due and the terms of the contract, within the framework of the legal order and within certain limits, such as the legal principles of morality and public order. Private autonomy is part of a freedom concept of the legal subject: the legal validity of a private agreement is based, as Werner Flume, an important scholar of German private law, stated, not only on the general legal order in whose context the agreement is concluded. It is especially based on the acknowledgement of private autonomy as an element of the acknowledgement of the self-determination of human beings as a fundamental principle of the legal order. 8 As private autonomy is an expression and an instrument of personal freedom, an agent of free will is needed which somehow rules or controls the acting of an artificially intelligent 'subject', if we wish to concede a contractual free will to AI. Even if we accepted that AI can be a personification in the sense of a (partial) legal subject, this would not mean that the mentioned concept of freedom can be applied to it. Artificial intelligence is subject to algorithmic determination, even if it may have the ability to learn or to invent solutions for certain new problems.
6 AI-contracts in this sense must be differentiated from 'smart contracts' or 'intelligent contracts'. These are protocols which automate the execution of contracts using a distributed, decentralized blockchain network. The automation factor of smart contracts places them close to vending machines.
Private autonomy as a fundamental principle of the civil law based on a freedom concept is affected not only if an 'e-person' is acknowledged by the law, but also, even without accepting any autonomous artificially intelligent legal subjects, because the free will is delegated from the natural person to the algorithm if the algorithm is technically able to act independently of its user. 10 The free will may thereby be displaced by machine determinism. Humans would, in the last consequence, subordinate themselves to the machine.
A basic problem of the idea of an 'e-person' is that a legal subject with full legal capacity necessarily needs a free will and, inseparably from this, an emotional capacity giving it the ability to wish and to desire. A personification of AI can therefore, if at all, only be a very limited, partially legally capable subject. The acknowledgement of such a legal subject would happen only for pragmatic reasons. Its capacity might be limited to the capacity of owning property, and even that not in order to use 'its' property as an instrument of personal freedom (which it would not need, due to its lack of freedom) but only as liability assets.
This shows that, in order to understand the scope of a possible analogy of private law terms for contractual activities of AI, especially of the term 'contractual intent', a clear view of the relevance of subjective rights for the personal freedom of the individual is indispensable. The ability to contract requires legal capacity of the AI entity. Once we accept autonomous, legally capable 'e-persons' in private law, it is essential to understand that by doing so we would separate the subjective right from individual freedom and, in the last consequence, from human dignity as the basis of the status of the legal subject. The subjective right would lose its function as an instrument of the individual freedom of the legal subject.
This detachment of the private law subjective right and of contractual freedom from their basis in individual freedom is nothing less than a significant and categorial step for the system of private law. We would change the system of the civil law and its legal-philosophical foundation. The subjective right would no longer be reserved for natural persons and for legal persons constituted by their human organs. We would accept a shift to a purely functional concept of the subjective right, separated from its basis in the fundamental freedom rights.
This would not only be problematic in a systematic sense; it would also affect the social concept of modern private law: freedom corresponds with responsibility. Subjective rights as instruments of personal freedom correspond with social responsibility. 11 If we separate the contractual will from the freedom concept of the subjective right, then we also give up the legal-ethical concept of freedom and responsibility for contract law. The subjective right would be changed from a social legal concept into a merely pragmatic one.
One approach discussed for integrating the contractual acting of AI into the traditional concept of contracting is an understanding of AI as an 'agent' or 'representative' of a human user. An AI legal subject does not act in order to achieve 'egoistic' goals, due to its lack of emotions and desire. AI contracting will likely always be in the interest of a user of the system. As the AI then is not acting in its own interest but in the interest of the user, the declaration of intent ultimately serves as an instrument of the contractual freedom of the user and not of the machine. The machine would be a new category of 'technical representative' 12 of the user with its own 'control', but not with its own 'intent'. In this approach, the missing emotional 'will' element of the legal declaration of the 'electronic agent' is replaced by the intent of the human 'principal'. However, this approach fails at the latest as soon as the connection between the user and his AI is lost, namely when the AI is acting independently of any decision or control of its user. 13 The existence of a mere interest, real or assigned, of the user (instead of intent) seems not to be enough to preserve the systematic connection to the freedom concept.
The participation of AI in legal transactions without changing the concept of the subjective right and of the contractual will as a concept of personal freedom could be handled under two alternative conditions. First, legal declarations of algorithms may be interpreted as deduced contractual acting originating from the human will of a user. In this solution, at least some human determination in the contracting process must exist. One may speak of an 'anticipated will declaration' by the user which is executed by the AI. The US Uniform Electronic Transactions Act (UETA) stipulates in its Section 14: "A contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agent's actions or the resulting terms and agreements." This rule seems to represent an agent-principal approach, interpreting electronic acting as a contractual declaration in the interest of an individual without any connection to a specific human intent. This solution will no longer work when the AI decides totally independently of human influence. 14 A second approach may be the 'agent-principal' solution: here the AI is not just a technical transmission instrument for the expression of the will of a human legal subject, but an intermediary of the user or owner of the system. However, as soon as an advanced intelligent program potentially or factually interferes in the contracting process, influencing the negotiation, we will have to qualify its acting analogously to the acting of a representative and therefore as a declaration of the representative's own intent.
In this case, the already mentioned problem with the term of the contractual 'will' comes up again. Therefore, also under the agent-principal approach, a discussion on the term and function of the contractual will, or on a separate term of a 'machine intent', must be conducted.
After all, we have to state that the integration of AI into the contracting process clearly leads to a real disruptive effect in private law which may essentially change the dogmatic basis of contract law and of the concept of the subjective right. Without doubt, the need to integrate independently acting AI systems into private law will grow the more sophisticated such systems become. In contrast to the situation after the rise of the internet, we will not be able simply to apply the traditional legal institutes and terms of the civil law to autonomous e-contracting. The consequence is the need to decide between two fundamental methodological ways of dealing with this situation:
a. to try to integrate AI contracting by using the method of analogy extra legem, preserving the grown system of declarations of intent and contracts;
b. to define a new category of contract elements with its own rules and specifications, adapted to the need to protect the interests of legal communication, still including human behaviour and therefore requiring solutions at the 'interface' between human and machine.
The first way does not seem preferable. The method of analogy basically requires three conditions: 15
a. an analogy must not be excluded by law,
b. a regulatory gap must exist, and the gap must be unintended by the legislator, and
c. a comparable situation of interests must exist.
The analogy in question would be the replacement of the legal term of contractual or legal will or intent with an algorithmically determined electronic declaration. Obviously, AI cannot have a (legal) will or intent. Its actions and declarations are driven by the results of algorithmic combinations based on experience and (deep) learning. Its declarations would not be declarations of intent but, at best, declarations of combination results. The concept of the legal declaration in its current form is necessarily based on the free will of human entities. For this it does not matter in any way whether a declaration is assigned to a natural person or to a juridical person or collective subject with its own legal capacity. 16 Foundations, associations and partnerships act through their organs; their internal decision-making procedure is based on the human will of the organs. Consequently, there is no categorial difference between their legal will and the legal will of an individual, but there clearly is a categorial difference to AI declarations. We therefore see a subsequent (secondary) gap in contract law, as the legislator of course could not foresee the current technical development. Whether there is a comparable situation of interests is not clear. At first sight that may be the case, as human declarations of will may be replaced by machine declarations, and the situations in which we might replace human declarations would be the same as traditional contracting situations. However, the analogy fails for another reason: it is more than questionable whether a rule which is inseparably based on a certain fundamental concept, here the concept of individual (or collective) freedom, can be open at all to an analogy which entirely leaves this dogmatic basis. The rules of legal declarations of intent are changed in their essence if we analogize them in order to open them to AI declarations. An analogy which changes the dogmatic basis of the rule cannot be accepted.
The method of analogy means the application of a general legal principle, taken from the legal rule which is to be analogized, to a different case which is comparable to the case regulated by the law. The general principle taken from the legal rule in question is that legal declarations are based on legal intent. Applying this to AI declarations without the need for any will or intent is not only an application of the general principle but a change of the very principle to be applied. Therefore, we cannot speak of an analogy here.
16 In German corporation law a differentiation exists between entities which possess legal personality ("juristische Personen", i.e. the limited liability company "GmbH", the stock company "AG" and the foundation "Stiftung") and entities which have legal capacity but not full legal personality ("Rechtssubjekte ohne Rechtspersönlichkeit", i.e. commercial partnerships like the "Offene Handelsgesellschaft OHG" and the "Kommanditgesellschaft KG", or non-registered associations). The legal rule giving legal capacity to commercial partnerships (OHG and KG), Sec. 124 para. 1 of the German Commercial Code, was acknowledged by the Federal Court of Justice as a legislative expression of a general principle and transferred to other entities beyond the Commercial Code, such as the partnership under the Civil Code ("BGB-Gesellschaft"). The dogmatic background for this decision was the Group Theory of Otto von Gierke. In this case the legal subjectivity does not go as far as legal personality. In German legal doctrine this is called "Teilrechtsfähigkeit" (partial legal capacity), further developed by Werner Flume.

How to Integrate AI Acting into Contract Law?
On this basis it is obvious that AI contracting actions would not fit into the traditional concept of the declaration of intent in private law. Any attempt to integrate them by an analogy with the legal intent of natural persons and existing juridical persons would lead to a disruptive interpretation of the foundations of private law and its constitutional context. Legal science would have to redefine the fundamentals of private law, starting with the meaning and purpose of the subjective right and the legal subject. The pragmatic way of a consistently functional understanding of fundamental institutes of (private) law does not merely mean opening the door for just another new phenomenon of the technologized society. It would in fact mean a rather dangerous way of 'de-constitutionalizing' private law, which would lead to further problems and which should be carefully considered by legal philosophy, as it touches the question of the understanding of law as an instrument for regulating the co-existence of individuals in societies.
Many consequences are not yet sufficiently discussed, such as the problem of the collision of human morality and machine logic in certain social situations. The fundamental question of the equal treatment of all categories of legal subjects would come up if we postulated a 'separate ethic' of e-persons, 17 different from human ethics, which could legally privilege the logical machine actor in relation to the feeling and flexibly acting natural person, for instance within tort law. Should the human actor still be fully responsible or liable in situations in which the machine actor cannot be liable because of its lack of emotions? In which situations could the mere individual refusal to use artificially intelligent support systems in social situations already be a reason for liability, if the natural person causes a damage which would not have occurred if a supporting AI entity had acted? At present we still only use weak artificial intelligence, and we are still able to assign responsibility for machine acting to human users. As soon as machine entities can no longer be overruled by human users, and as soon as they are installed in a general public interest without giving individuals the choice of whether they want to use AI or not, the currently existing reasons for assigning machine acting to human users will no longer work. We cannot assign liability to a natural person who is not free to choose whether to use artificial intelligence or not. At the latest at this stage of technological and social development, a general shift of legal responsibility and legal risks from individuals to society must be considered. These questions of a new social risk distribution have to be discussed depending on the political decision for a more (or less) human-centric artificial intelligence. However, there is an undeniable need to support the technical development of artificial intelligence and, with it, its integration into the legal system.
It cannot be denied that there is a strong practical need for the integration of machine acting into private law, here especially into contract law, but also into other areas of the law, such as the behavioural context of competition law. No globally integrated legal system can, in the long term, afford to reject the development of artificial intelligence and to deny the need to support this development and the research on and use of such systems in society and the economy.
The specific challenge for national legislation will be the same as we can already observe regarding data protection: here we see a 'clash' between well-founded traditional ideas of the protection of the individual against the abuse of personal data and the 'reality' of a globalized use of media. The European General Data Protection Regulation (GDPR) 19 is based on the perspective of an older technological level, when personal data were still to a large extent analogue or locally stored. Data were still controllable by the states through their territorial legislative and administrative power. Nowadays, however, we see a shift from territoriality to ubiquity, which results in a loss of control of the national states over the data of their citizens. A certain remaining power to regulate ubiquitous aspects of citizens' activities lies at most with politically and economically powerful nations and state blocks, such as the US and the European Union, 20 or with authoritarian states like the P.R. China. 21 Classic principles of data protection law, such as the rule of data minimisation (Art. 5 par. 1 lit. c EU-GDPR), are scarcely applicable in a reasonable way without hindering important innovation, namely the development of artificial intelligence systems in society and the economy. The use of artificially intelligent human resources systems to manage personnel in big enterprises may serve as an example. Such systems must learn, and for learning they need training data. In current company practice it is a problem to provide enough training data for those systems due to the strict rules of the EU-GDPR, which forbid storing and processing personal data without a clearly defined purpose made transparent to the individual before use.
19 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). 20 By extraterritorial application of their national rules; here the economic relevance of the national market leads to a certain international compliance with the national rules of this market. 21 By separating the national public space from the international space, e.g. by using technical control of the data spread within the national population, thus excluding the influence of global media.
Data of employees which are stored for the purpose of managing the employees' labour relations in the company therefore cannot simply be used as training data to develop the AI human resources system of the company. The GDPR rules may turn out to be too strict, leading to the need to rebalance the relation between the individual interest in privacy and the social interest in supporting innovation and international competition. It may turn out that the rather prohibitive approach of the traditional European data protection law has to be relaxed in favour of a more flexible approach which puts more emphasis on strengthening the data sovereignty of the individual. The challenges concerning the legal integration of artificial intelligence innovations will be very similar: to find a pragmatic approach for the integration of the new technology, balancing the interest in keeping pace with global technical and scientific development against the interest in preserving an ethical basis of the legal system. 22 Therefore, the civil law will basically have to integrate AI acting in one way or another into contract law, tort law, the law of legal entities and competition law. As long as there is still some connection to a specific individual interest in the contract, a new category of contractual acting with special rules to protect legal transactions 23 should rather be developed, in order to preserve the systematic connection between the legal instruments of personal freedom and the term of legal intent. The extension of the term of contractual will or intent by means of analogy is clearly a dangerous way which has the potential to result in an erosion of the foundations of private law and subsequently of constitutional law.
A new category of contractual machine acting sui generis could be designed, depending more or less on the requirement of control instead of will or intent. The more control an individual has over the acting of the machine, the more similarly to contractual will acting the new form of contractual declaration may be regulated. The more control the individual has, the more the machine acting may be considered an indirect expression of the will, and therefore of the freedom, of the individual in whose interest the machine is active.
However, once artificial intelligence entities act fully independently of individuals, once they act merely in a general interest of society or of institutions without direct or sufficient indirect control by human individuals, the assignment element of 'control' would recede. This is connected with the basic decision on the question of how much independence of machines a society can and wants to tolerate in the future.
22 A similar question arises in patent law, which may not, in its present state, sufficiently support new forms of autonomous AI-based 'inventions'. Clearly, this has its reason in the anthropocentric philosophy of intellectual property rights as instruments of protection of the creativity aspect within the personality of individuals. Here, as in contract law, the law is based on an individualistic concept. It seems difficult to analogize the term of the 'inventor' to fully autonomous AI inventive activities without disrupting the philosophical foundation of the intellectual property rights. The legislative creation of an industrial property right sui generis for 'inventions without inventor' may be the better solution; see the detailed analysis by Tim W.

Development of a Regulatory Framework for AI in the European Union and in Germany
Already in 2017 the European Parliament issued a resolution proposing that the member states of the European Union create a specific legal status for robots as `electronic persons´. 24 This resolution obviously had little impact on the legislative development of the member states, as the acknowledgement of legal subjectivity remains within the exclusive competence of the member states. However, it showed that artificial intelligence had come to the attention of the institutions of the EU. In April 2018 a European strategy communication, `Artificial Intelligence for Europe´, was presented. This strategy seeks a balance between the interest in encouraging the European technological and industrial capacity, the need for a proper preparation for the socio-economic changes brought by AI and the duty to ensure fundamental rights within the EU. 25 In 2019 a communication of the European Commission followed, sketching a European AI strategy by defining seven key requirements for the social and ethical development and integration of AI. 26 Those key requirements refer to
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability.
Finally, the Commission presented a White Paper on artificial intelligence in 2020, 27 discussing the background and scope of the planned regulatory system and possible adjustments to the existing EU legislative framework, 28 as well as questions of compliance and law enforcement, 29 governance 30 and a voluntary labelling scheme for AI applications. 31 On 20 October 2020 the European Parliament adopted the artificial intelligence package, thus answering the White Paper of the Commission. This package contains three parts:
- a report on a framework of ethical aspects of AI, robotics and related technologies, 32
- a report with recommendations to the Commission on a civil liability regime for AI 33 and
- a report on Intellectual Property Rights (IPRs) for the development of AI technologies. 34
The report on civil liability, which develops different liability rules depending on the specific risk of the use of artificial intelligence, is especially interesting for the private law. It proposes a strict liability for any damage caused by artificial intelligence applications posing certain high risks. 35 A legislative proposal of the European Commission is expected for the first months of 2021. Politically, however, the regulatory approach of the European Commission is currently disputed, as in October 2020 several EU member states proposed an alternative `soft law approach´ in order to avoid barriers to the development of AI technologies. 36 In the German legal development, important upcoming amendments of the German national Antitrust Act (GWB) are also to be mentioned. Germany is among the first countries to regulate algorithmic influence on competition. Already the 9th amendment to the German Antitrust Act adapted the law to the growing digitalization of markets.
It enabled the competition authorities to take into account network effects in markets involving multi-sided platforms such as search engines or price comparison platforms, which may lead to market concentrations (§ 18 par 3a GWB). Additionally, it allowed a relevant product market to be defined even if no money is transferred between the immediate participants (§ 18 par 2a GWB), which is for example the case with price comparison portals or internet search engines. On 29 October 2020 the German parliament, the Deutscher Bundestag, debated a draft of the federal government for a `GWB-Digitalisierungsgesetz´. 37 With this draft the German government aims to regulate the abuse of market dominance by platform enterprises which build a closed `digital ecosystem´ and use it to gain cross-market power, for example supported by algorithms (algorithmic pricing) or by the use of their own big data stock. The new legislation seeks to give competitors non-discriminatory access to data and information in order to keep markets open and support innovation. This will be especially relevant for platforms which combine social media services (data collecting) with advertisement and/or online shopping business. It is also an answer to a tendency within e-commerce for strong enterprises to act as competitors and at the same time as service suppliers for their competitors, offering their already widely established platform to other, smaller sellers. 38 Antitrust law has a significant loophole here, as the recent activities of the European Commission against certain discriminatory aspects of the online shopping platform Amazon regarding its marketplace show. 39 Obviously, the use of artificial intelligence in the business-to-consumer relation plays a significant role in strengthening, and possibly abusing, the market dominance of such platforms.

B. Conclusion
If AI legal acting without any human control or human interference is accepted, how are problems of damage caused by such acting to be handled? Contracting by individuals is connected with pre-contractual mutual duties, for example regarding important information on the item or other circumstances which may be relevant. Who is responsible if an algorithm, following its programming while interacting with an individual, does not disclose all necessary information? Machines cannot act responsibly. Does the use of machines lead to the extinction or diminution of the meaning of responsibility in the law? This may mean exactly the paradigmatic shift mentioned above, from a socially integrated civil law to a utilitarian functional law which is separated from its ethical-philosophical basis.
In the middle of the last century the US science fiction author Isaac Asimov described a society which integrates robots subject to robot laws. These laws subordinate the freedom and responsibility of the machine to the human-centric interest by giving the robot the inviolable order to protect individual and collective human life, instead of, at first sight, including artificial entities themselves in the ethical or ethic-incentive system of duty and intent of human societies. 40 The AI in this fiction, though perhaps able to act wilfully and independently, is limited by its own laws. This shows the ethical paradox: compliance with laws requires the ability to wish to comply, whereas a programmed limitation of the AI's acting would not be `law´ in this sense. 41 However, we are not this far yet. Nevertheless, we need clear rules on who is to be held responsible for the acting of artificial intelligence, now and in the next steps of the technical development. Rules for the allocation of risk and responsibility between the AI agent, individuals and society have to be elaborated, as well as new solutions requiring the existence of an individual `representative´ for any AI subject.

38 This may lead to conflicts of interest, e.g. as the platform enterprise may gather customer information and other business data of the independent sellers using the platform and may use this information in its own sales activities.
39 See press release from 10 November 2020, https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2077 (30.11.2020).
40 These laws are: 1. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." 2. "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." 3. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." 4. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." See Isaac Asimov, Runaround, in: I, Robot, New York, 1950, page 40.
41 A further philosophical question is why a feeling and suffering entity should be treated in a different way than a human. This would lead to the validity of absolute human-centric rules, partly deduced from a theological thinking.
The recent legislative steps of the European Union institutions, encouraging the development of a regulatory framework for the integration of artificial intelligence covering all social, technological and economic areas, and the awaited draft of a legislative proposal are important measures to adapt the legal situation to the upcoming challenges. However, beyond the amendment of the European regulatory system, those steps will not render obsolete the discussion of how artificial intelligence can be integrated in a dogmatically consistent manner into the systems of the national private laws.