UNDERSTANDING INSTITUTIONAL AI (ARTIFICIAL INTELLIGENCE): SECTORAL CASE STUDIES FROM INDIA
By Anulekha Nandi

Understanding Institutional AI: Sectoral Case Studies from India
Working Paper
February 2020

This work is licensed under a Creative Commons Attribution 4.0 International License. You can modify and build upon this document non-commercially, as long as you give credit to the original authors and license your new creation under the identical terms.

Author: Anulekha Nandi
Design and Layout: Satish Kumar

You can read the online copy at www.defindia.org/publication-2

Digital Empowerment Foundation
House No. 44, 2nd and 3rd Floor (next to Naraina IIT Academy)
Kalu Sarai (near IIT Flyover)
New Delhi – 110016
Tel: 91-11-42233100 / Fax: 91-11-26532787
Email: [email protected] | URL: www.defindia.org

CONTENT

Introduction 4
The AI policy landscape in India 5
Proposed data governance framework 6
Selection of cases 7
National and international landscape on AI policy, data governance, and corporate accountability 8
Artificial intelligence in education in India: Questioning justice and inclusion 11
  - Introduction 12
  - Institutional AI in the Indian technology and education paradox 12
  - Identifying parameters of algorithmic decision making and its implications for justice and inclusion 14
  - Conclusion 18
  - Action steps 19
Building data architectures for AI driven efficiency in healthcare: Identity, ownership, and privacy 21
  - Addressing institutional and capacity gaps in healthcare 22
  - Trends in AI adoption 22
  - Creating data architectures for integrating AI in healthcare 24
  - Identity, ownership, and privacy 26
  - Conclusion 27
  - Recommendations 28
Automated decision-making systems in law enforcement: On the need to balance between privacy and security 29
  - Prevalence of AI use in law enforcement: Nature and types of application 30
  - State profiles – Rajasthan, Punjab, Delhi, Uttar Pradesh 30
  - Other profiles of deployment 33
  - Navigating privacy and security: Considerations on interoperability, data fusion, and operational due diligence 34
  - Conclusion 36
  - Recommendations 36
Findings 37
  - Differences 37
  - Commonalities 38
Conclusion 41
Recommendations 43

INTRODUCTION

India ranks 3rd in the Asia Pacific region on Artificial Intelligence (AI) readiness after Singapore and Hong Kong, according to a Salesforce study[1]. India ranked 3rd among G20 countries in 2016 on the number of AI-focused start-ups, which have increased at a compound annual growth rate of 86%, higher than the global average[2]. According to a study by Accenture, AI has the potential to add $957 billion, or 15% of gross value added, to India's economy by 2035[3]. Though the AI market in India is estimated to grow to $89.8 billion by 2025, it continues to be underinvested in by risk capital investors compared to the competition in the US and China[4]. The Indian ecosystem also has its limitations in providing an enabling environment for AI due to a lack of quality data and reliable datasets. Despite these limitations, coupled with the challenges of India's demographic and linguistic diversity, venture capitalist firms report that 25-30% of the proposals they receive contain a significant AI component[5]. Further, companies that have managed to raise capital and establish their business are seeing positive traction and expansion not just domestically but also internationally[6].
Alongside the proliferation of proprietary technology and start-ups incorporating AI in their service delivery, there are institutional applications of AI through public-private partnerships (PPPs) that aim to extend the application of the technology towards improving social sector outcomes. In some sectors such deployments have taken the course of pilots through PPP arrangements; in others they have moved towards policy alignment, laying the foundations to springboard such interventions; and in still others they have moved towards significant adoption and deployment. One of the recurring felt needs articulated by different policy documents is the lack of data infrastructure/architecture on the back of which AI applications can be deployed and scaled. Sectoral policy documents have articulated different forms of data integration and database linkages; however, the unifying idea seems to be that of IndiaStack, undergirded by Aadhaar and the proposed JAM trinity[7][8]. Given the direction of sectoral policies to interlink and aggregate databases under their purview, this has the potential to provide a data-driven 360 degree view of an individual within administrative institutions (see the illustrative sketch below). However, this potentiality is not circumscribed by legal safeguards that can provide either sectoral or general-purpose protections against misuse and move towards a standard measure of transparency, accountability, safety, security, and due diligence in protecting individuals from the risk of harms.

[1] EC Bureau. (2019). India third in APAC on AI readiness. Economic Times. Retrieved from https://economictimes.indiatimes.com/small-biz/startups/newsbuzz/india-third-in-apac-in-ai-tech-readiness/articleshow/68805284.cms?from=mdr [16 Feb 2020].
[2] Sachitanand, R. (2019). Here's why Indian companies are betting big on AI. Economic Times. Retrieved from https://economictimes.indiatimes.com/tech/internet/heres-why-indian-companies-are-betting-big-on-ai/articleshow/67919349.cms?from=mdr [16 Feb 2020].
[3] Ibid.
[4] Ibid.
[5] Ibid.
[6] Ibid.
[7] TAGUP. (2011). Report of the Technology Advisory Group for Unique Projects. Retrieved from https://www.finmin.nic.in/sites/default/files/TAGUP_Report.pdf [28 Nov 2018].
[8] The JAM trinity refers to the triangulation of data through the Aadhaar database, mobile numbers, and the Jan Dhan social protection scheme for financial inclusion.
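The "data-driven 360 degree view" described above can be made concrete with a minimal, purely illustrative Python sketch. Every database, field name, value, and the shared identifier below is hypothetical and does not describe any actual system; the sketch only shows the mechanism, namely that once sectoral records are keyed to a common identifier such as Aadhaar, aggregating them into a single profile requires nothing more than a lookup and merge.

    # Purely illustrative sketch: the databases, fields, and identifier below are
    # hypothetical. It shows how records held by separate sectoral systems, once
    # keyed to a shared identifier, can be merged into one consolidated profile.

    health_records = {"1234-5678-9012": {"last_diagnosis": "Type 2 diabetes"}}
    education_records = {"1234-5678-9012": {"board_exam_percentile": 62.5}}
    welfare_records = {"1234-5678-9012": {"jan_dhan_account": True, "mobile": "98XXXXXX01"}}

    def build_360_degree_view(identifier, *databases):
        """Merge every sectoral record sharing the same identifier into one profile."""
        profile = {"id": identifier}
        for db in databases:
            profile.update(db.get(identifier, {}))
        return profile

    # A single lookup now exposes health, education, and welfare data together.
    print(build_360_degree_view("1234-5678-9012",
                                health_records, education_records, welfare_records))

The point of the sketch is that such aggregation requires no additional data collection; linkage on a common key is sufficient, which is why the absence of legal safeguards around database interlinking carries the risks of misuse noted above.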
THE AI POLICY LANDSCAPE IN INDIA

The Indian digital policy and regulatory landscape is awash with initiatives to mainstream AI in key social and economic infrastructures. The National Strategy for Artificial Intelligence, released by NITI Aayog (the Indian government's policy think-tank) in June 2018, underscores the government's policy intent to mainstream AI in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation[9]. The Artificial Intelligence Task Force (AITF), constituted by the Ministry of Commerce and Industry, has identified 10 focus areas in which AI can be used to address institutional gaps plaguing those sectors. These include manufacturing, FinTech, healthcare, agriculture/food processing, education, retail/customer engagement, accessibility technology for the differently abled, environment, national security, and public utility services[10].

The Ministry of Electronics and Information Technology constituted four committees for promoting AI initiatives and developing a policy framework: (1) Committee A on platforms and data for AI; (2) Committee B on leveraging AI for identifying national missions in key sectors; (3) Committee C on mapping technological capabilities, key policy enablers required across sectors, skilling and re-skilling, and R&D; and (4) Committee D on cyber security, safety, legal and ethical issues[11]. Committee B identified 17 key sectors that would require national AI missions for infusing technology-enabled efficiency[12]. These multiple initiatives by different government departments highlight the deepening ethical, legal, and technological complexity of India's chequered artificial intelligence landscape. They also demonstrate a move towards adoption, uptake, and deployment of AI in critical social, economic, and governance sectors by government and administrative institutions.

[9] NITI Aayog. (2018). National strategy for artificial intelligence. Retrieved from https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf [16 Feb 2020].
[10] DIPP. (n.d.). Report of task force on artificial intelligence. Retrieved from https://dipp.gov.in/whats-new/report-task-force-artificial-intelligence [16 Feb 2020].
[11] MEITY. (n.d.). Artificial intelligence committee reports. Retrieved from https://meity.gov.in/artificial-intelligence-committees-reports [16 Feb 2020].
[12] MEITY. (2019). Report of committee B on leveraging AI for identifying national missions in key sectors. Retrieved from https://meity.gov.in/writereaddata/files/Committes_B-Report-on-Key-Sector.pdf [16 Feb 2020].

PROPOSED DATA GOVERNANCE FRAMEWORK

The current data protection framework, in the shape of the Personal Data Protection Bill, 2019, which presently stands referred to a Joint Committee, does not contain any specific reference to automated decision making. While the risk of harms delineated therein can be extended to provide protection against risks arising out of automated decision-making, their application would remain the prerogative of the Data Protection Authority, whose appointments are at the discretion of the Central Government. Given that the Central Government also has the authority to exempt any agency of the government from the application of the Data Protection Act, this undermines the independence with which the Authority should be able to function and act as an independent oversight body, particularly in cases of mass data collection for intelligence and law enforcement purposes. Compounding this are the significant carve-outs for non-consensual processing (section 12) for government
