Artificial Intelligence and Government
Artificial intelligence (AI) has grown remarkably, arguably marking a qualitative leap in the already outstanding technological development under way since the sixties. This growth is largely due to the success of natural language processing (Alexa, chatbots) and, above all, of image recognition. It is also accompanied by a hype that leads most technology companies to put the acronym AI on the front page of their websites.
I would dare to say that technological development in recent decades has no comparable historical antecedent since the Industrial Revolution, and, just as it did back then, it has influenced many aspects of economic growth. This can lead one to believe that technology is so powerful that whatever technology (and the companies that own it) decides will inevitably happen. I think this is a myth: growth depends more on what governments and society do to decide the role that technology and knowledge will play in the future. In other words, the path to growth is a political decision, and AI is no exception.

Here I will try to reflect on how a government could incorporate an AI approach into its management, which is much more than technology or computer tools. One way to do so is by buying software and management packages that include (or claim to include) AI in their development, but not in their use. These solutions work where the technology is already fully established (license plate recognition), but can fall far short of expectations (a chatbot that ends up being less useful than a page of frequently asked questions), or even prove dangerous (software that recognizes and classifies people, and makes automatic decisions based on those classifications). In the best case, these solutions are nothing more than an incorporation of technology: something that can be positive, but not new at all.

Going a step further and incorporating AI (or data science in general, including other kinds of analysis) is possible, and probably not very complicated, if we understand that it should be a general policy, not merely a technological one. Here are some ideas:

- The first thing is to understand what you want to do. It sounds like a truism, but I myself heard an IT manager say, "I already have my AI platform up and running; now I have to think about what to use it for." It does not work like this: the decision of what to do is still political or managerial, and must be devised by whoever has to solve the problem, not by whoever must build the solution: "I want incoming messages to be automatically routed to the office that should handle them." We also have to know how to do it: AI is not magic, and it is important to understand what the methods are and what their scope is. Every manager needs to understand the scope of AI, and that requires training.

- It is important to know what data we are working with and which data interest us. To apply AI, data is invariably needed: claims history, indicators and their evolution, the list of queries made to our service center, the most visited web pages, socioeconomic indicators by area, and so on. Data is everywhere today, and choosing what you need depends largely on first defining the objectives of the implementation.

- Once we know what we want to do and have the required data, we need to define how to do it. The view of AI as a black box is a simplistic idea widely used by technology companies to sell solutions, and it is generally false. Deploying an AI solution requires constant adjustment between those who define what to do, the domain expert (who knows the problem and the data), and the AI expert (who knows the models). If one of them is missing, it is likely that nothing will be achieved, or that the resulting solution will ultimately not be useful. This work must necessarily include ethical considerations, unless you want to end up with solutions that reproduce or introduce biases and cause more problems than they solve.

- Finally, if we really want a qualitative leap, we should publish data and models in open formats, betting on research and collaboration. The success of data science is largely the result of the efforts made in the field of open data and the consolidation of open-source software in these research communities.
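To make the "route each message to the right office" wish above concrete, here is a minimal sketch of how such a classifier might be trained. Everything in it is invented for illustration (the office names, the example messages, and the choice of a TF-IDF plus logistic regression pipeline); a real deployment would need a labeled history of actual messages and the manager–domain expert–AI expert loop described above.

```python
# Toy sketch: routing citizen messages to a hypothetical office
# using text classification (TF-IDF features + logistic regression).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: past messages and the office that handled each one.
messages = [
    "I want to renew my driver's license",
    "My license expired last month",
    "How do I pay my property tax?",
    "Question about the tax bill I received",
    "The streetlight on my block is broken",
    "Please fix the pothole on Main Street",
]
offices = [
    "licensing", "licensing",
    "taxes", "taxes",
    "public_works", "public_works",
]

# Train a simple pipeline: vectorize the text, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, offices)

# Route a new, unseen message to the most likely office.
print(model.predict(["Where do I renew an expired license?"])[0])
```

Even this toy version shows why the black-box view fails: someone must decide the set of offices, supply labeled past messages, and judge whether the predictions are good enough to act on.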
Keeping this in mind can make a huge difference, and is a must, in a field that grows in both data and knowledge, and where large Internet companies are capturing talent at a rate never seen before. For developing countries, such as those in Latin America, the challenge is even greater, but it also presents an opportunity. Collaboration as a strategy can open many avenues. We must follow and support experiences like ABRELATAM, or the Khipu and LXAI meetings of AI researchers in the region. Another opportunity that can contribute to this leap in the region is to multiply initiatives such as EmpatIA, organized by ILDA, in which I had the honor of participating as a jury member. Integration in Latin America has always been an unsolved problem, and perhaps there is a new opportunity here.