In the last year, we, as humanity, have created the technology to pass the Turing test. An optimist might say that humans and machines now have the same capacity for writing quality content; a more realistic view is that machines are better. Despite these great advances, with all their threats and opportunities, we still do not understand how deep learning works. One could say we have created the alien technology we feared for so long: it surpasses us in intelligence (defined as the ability to solve tasks), it understands and exploits us, yet we cannot understand it.

In the context of this new technology, a flood of AI-generated text could mean the end of democracy, a flood of generated art could mean the end of recognizing human creativity, and a flood of generated music could mean the end of understanding someone's feelings, replaced by an exploitation of what we like. We have already witnessed the problems caused by social media recommendation algorithms, which have affected the ability of millions of people to learn and focus. I strongly believe that, at this point, we cannot imagine the implications and effects of this new technology on humans.

As Frank Herbert observed, reformers have caused more damage than almost any other force in human history when they try to completely change the course of things. Instead, one should focus on finding the natural course of things, follow its trajectory, and influence it in a positive way. In this spirit, the powerful new AI technology is here to stay, and despite the threats, we need to use it, understand it, and create a better world with it.

In contrast with this concerned beginning, I strongly believe that AI could be the best and most interesting technology humans have developed so far. It is the most advanced intelligence we have created for solving different kinds of problems. It is used not only to empower individuals but also to reduce suffering in key areas where human intelligence reaches its limits or moves too slowly. It can already improve day-to-day life in meaningful ways: it accelerates drug development, creates simulations of different treatments for cancer patients, and the list keeps growing.

In this line of thought, the importance of Explainable AI (XAI) is growing. How can we create systems that use the power of deep learning while also allowing humans to understand what happens behind the scenes and influence their behavior? Such systems are important for expanding the use of deep learning to critical areas such as journalism or medicine while keeping responsibility and decisions on the human side. One of these initiatives is the tool presented in the following pages: MindBugs Discovery, a fact-checking tool for journalists.

Graph subset: one statement analysis
MindBugs Discovery architecture

MindBugs Discovery aims to create an interactive environment for visualizing the hidden structure of misinformation. In a world where reality and fiction merge, a scalable system of verified information is becoming a necessity. The tool is designed for journalists, to ease their work, and for members of the general public who are curious about disinformation techniques and propaganda. It is an initiative that gathers the scattered work of fact-checking organizations into a single place, helping people understand and easily identify the fake information they encounter. It is a centralized space and a search engine where the work and knowledge of human specialists are made available.

Deep dive into the technical implementation of MindBugs Discovery


MindBugs Discovery has indirectly received funding from the European Union's Horizon 2020 research and innovation action programme, via the AI4Media Open Call #2 issued and executed under the AI4Media project (Grant Agreement no. 951911).

Any promotion made by the beneficiary about the project, in whatever form and by whatever medium, reflects only the author's views; neither the EC nor the AI4Media project is responsible for any use that may be made of the information contained therein.