
Evolving laws pertaining to artificial intelligence in various jurisdictions

Artificial intelligence (AI) is the current hot topic: its development and use are moving forward rapidly and contributing to the world economy. AI offers several advantages, for example enhancements in creative thinking, services, safety, and lifestyles, and help in solving problems, yet at the same time it raises several anxieties and concerns and can adversely affect human autonomy, privacy, and fundamental rights and freedoms. This trend primarily concerns the creation of technologies that may radically change the market economy, forcing professionals out of various fields. The technological paradigm of the digital economy forms new markets that give rise to new regulatory measures and subjects of regulation, including AI. The legal concerns surrounding AI are ever present and cannot be underestimated at any point in time.

  2. Existing laws governing AI
    The European Union has been the most active in proposing new rules and laws, with proposed rules in seven out of nine classes of areas where regulation could be applicable to AI. The United States, on the other hand, maintains a “light” regulatory posture with respect to laws around AI. In Russia, a bill similar to the EU legislation, known as the Grishin Law (2015), is under consideration by the Russian Parliament. The draft law introduces amendments to the provisions of the Civil Code of the Russian Federation and, regardless of the robot’s autonomy, imposes all responsibility on the robot’s developer, operator, or manufacturer; it also addresses issues such as the robot’s representation in court and supervisory agencies. The development, adoption and promotion of AI has been visibly high on the list of priorities of the Indian Government, an approach that rests on the premise that AI has the potential to make lives easier and society more equal. In 2018, the Union government allocated substantial funding towards research, training and skilling in emerging technologies like AI, a 100% increase over previous investment.
  3. Persisting ambiguity in AI Laws
    “With great power comes great responsibility”, a maxim often traced to the French Revolution, which ultimately helped bring about a law-bound form of government, has never been more befitting than in the current AI revolution. With the rapid growth of AI, governments around the world are trying to stay abreast of these developments and to ensure that existing laws and regulations remain relevant as new challenges arise.
    Key legal issues creating friction between existing AI-related laws and the continuing growth of AI are: lack of algorithmic transparency and contestability; cyber security vulnerabilities; unfairness, bias and discrimination; legal personhood issues; intellectual property issues; and lack of accountability for harms.
    3.1 The lack of algorithmic transparency and contestability
    Given the proliferation of AI in high-risk areas, the lack of algorithmic transparency means that “pressure is mounting to design and govern AI to be accountable, fair and transparent.” Transparency has its limitations and is often portrayed as inadequate. The European Union Parliament STOA study set out a number of policy options to govern algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges; each option addresses a different aspect of algorithmic transparency:
    A. Awareness raising: education, watchdogs and whistleblowers;
    B. Accountability in public-sector use of algorithmic decision-making;
    C. Regulatory oversight and legal liability; and
    D. International coordination for algorithmic governance.

The lack of contestability refers, in relation to algorithmic systems, to the “lack of an obvious means to challenge them when they produce unexpected, damaging, unfair or discriminatory results”.

3.2 Cyber security vulnerabilities
Fully automated decision-making leading to costly errors and fatalities; the use of AI weapons without human intervention; problems arising from AI vulnerabilities in cyber security; the application of AI to surveillance or to cyber security for national security, which opens a new attack vector based on “data diet vulnerability”; the use of network intervention methods by foreign-deployed AI; and larger-scale, more strategic versions of the current advanced targeting of political messages on social media are some of the cyber-security issues associated with AI.

3.3 Unfairness, bias and discrimination
A Council of Europe study outlines that the law leaves shortfalls where it does not extend to characteristics not expressly protected against discrimination, or where new classes of differentiation are created that lead to biased and discriminatory effects. While the law clearly regulates and protects against discriminatory behaviour, it is suggested that it falls short in these cases.

3.4 Legal personhood issues and Intellectual property issues
The High-Level Expert Group on Artificial Intelligence (AI HLEG) has specifically urged “policy-makers to refrain from establishing legal personality for AI systems or robots”, noting that this is “fundamentally inconsistent with the principle of human agency, accountability and responsibility” and poses a “significant moral hazard”. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable options against which the law protects us. Many intellectual property issues have not been addressed or answered conclusively, and current regimes have been described as “woefully inadequate to deal with the growing use of more and more intuitive artificial intelligence systems in the production of such works” (Davies). Further research and exploration are needed, especially as AI advances and it becomes increasingly difficult to identify the creator.
3.5 Lack of accountability for harms
The accountability gap is a worse problem than it might first seem, causing difficulties in three areas: causality, justice, and compensation. As a Privacy International and Article 19 (2018) report states, “Even when a potential harm is found, it can be difficult to ensure accountability for violations of those responsible.”

  4. Current scenario of AI Laws
    According to the International Data Corporation (“IDC”), the AI market is expected to reach $35.8 billion this year, an increase of 44% since 2018. IDC has also projected that global spending on AI will more than double by 2022, reaching $79.2 billion. Governments are currently updating their privacy legislation to respond to privacy concerns fuelled by the public outcry against massive data breaches and the unchecked use of data by large corporations. Consumers have become increasingly concerned about the potential misuse of their personal data.
  5. Conclusion
    AI will continue to develop and to disrupt society in ways that we cannot yet imagine. It is challenging to keep pace with the speed at which AI systems are being deployed. One developer recently described AI as “a sort of peanut butter you can spread” across multiple disciplines and industries.
