Abstract [eng]
This Master's thesis analyses automated individual decision-making, including profiling (Article 22 GDPR), and the issues that arise when this Article is applied to data controllers using AI in their activities. These issues are examined through soft-law sources, the differing positions of legal scholars, and interpretations in case law. The thesis clarifies the concepts of profiling and automated decision-making. The data subject's right not to be subject to automated decision-making (Article 22(1) GDPR) is analysed by presenting the ambiguity of this right and the conditions and limits of its application. The thesis also analyses the issues involved in ensuring adequate safeguards for the rights, freedoms and legitimate interests of data subjects when decisions referred to in Article 22(1) GDPR are taken using AI. This issue is most pronounced where sophisticated AI technologies are used: a human being's limited capacity to process large amounts of data may make it difficult to guarantee the right to obtain human intervention or to contest the decision. The thesis further examines the right to obtain an explanation, concluding that, if such a right were recognised as mandatory, data controllers would have to choose between forgoing sophisticated AI systems (whose decisions cannot be explained) and bearing the risk of fines under the GDPR. In the context of the issues examined, it is concluded that the right to obtain an explanation is not formally mandatory, but that, based on the principles enshrined in the GDPR, data controllers should in practice give effect to this right in certain cases.