
SIPA’s Entrepreneurship & Policy Initiative Working Paper Series

Entrepreneurial Solutions to Online Disinformation: Seeking Scale, Trying for Profit

By Anya SchiffrinI

Executive summary

As worries about dis/misinformation online and the contamination of public discourse continue, and international regulatory standards have yet to be developed, a number of private sector companies have jumped into the breach to develop ways to address the problem. Most use a combination of people and natural language processing to try and identify false information online. Nascent and not yet widespread, the firms hope to survive by finding commercial applications for their products or by being adopted by the social media platforms. Most of the 13 startups we looked at fall into two categories: "bootstrap" operations with small amounts of funding, or slightly larger enterprises that have raised about one-two million dollars of capital. Whether any of these firms will be able to scale is not clear.

Introduction

Many solutions are being tried to combat the alarming spread of mis/disinformation online. To name just a few: governments around the world are imposing regulations as to what can and cannot be said online, foundations are funding fact-checking initiatives and efforts to build trust in media. Journalists are trying to build relationships with their audiences and strengthen local news reporting so as to provide societies with credible and useful information. Journalists also track the people and governments who create and spread false news and disinformation. Facebook, Google and Twitter claim to be tackling the problem of removing dangerous/illegal content but are universally resented because they profit from the spread of mis/disinformation and have been slow to respond to warnings about the dangerous effects it is having on society. Facebook has blocked tools promoting transparency of advertising on the site.1 Facebook has consistently refused to release data to researchers seeking to measure the impact of exposure to disinformation.

In this fragmented universe of solutions, characterized by a lack of comprehensive platform regulation, a number of small private sector companies have proposed ways of solving the problem. The aim of this report is to understand what solutions they are developing and outline some characteristics of the sector and points in common. We also tackle the question of whether their proposed solutions will be effective in the absence of widespread adoption by the giant platforms.

We carried out more than 20 in-depth interviews between December 2018 and February 2019, and this report includes short profiles of 13 firms. We identified the companies by asking experts in the field for suggestions, which we then cross-referenced, as well as by reading press reports. Most interviews were conducted on the phone but some were done in person. We asked the companies about their technology and how it works, their business models, annual revenues, and their plans to scale, as well as their thoughts about government regulation.

I. Anya Schiffrin is the director of the Technology, Media, and Communications specialization at Columbia University's School of International and Public Affairs.

Some general observations about the companies we studied:

• The companies more or less fall into three categories: a) detection and mapping, b) curation, c) certification.
• There is disagreement as to whether natural language processing will be able to identify all forms of mis/disinformation online. Therefore, most of the outlets use a combination of people and natural language processing to try and identify false information online so it can be taken down.
• Nascent and not yet widespread, these companies hope to survive by finding commercial applications for their products or by being adopted by the social media platforms. Brandwatch and Alto have existing businesses, and this new area or new application of their technology is not critical for their business continuity.
• Most of the 13 startups we looked at fall into two categories: bootstrap operations with small amounts of startup funding, or slightly larger firms that have raised about one-two million US dollars of funding. Some, like Brandwatch and Alto, are not startups. Alto has more than 100 employees and Brandwatch was founded in 2006. Brandwatch has 64.7 million dollars in funding.3
• Mostly the firms did not set out to crack the problem of dis/misinformation online. Many were involved in other activities and came across mis/disinformation and decided to do something about it, notes Alejandro Romero of Alto Analytics, a data analysis firm based in Madrid with a presence in the UK, US and Brazil.

Many have day jobs and core businesses. Many have ties to intelligence and government, cyber security, and fraud detection. Some are tiny (Eric Feinberg from GIPEC is just one person); some are large and well established.

"All the companies working in this space were working on something else and the disinformation they saw had an impact on what they were doing. They saw a business opportunity and thought this could be an interesting learning. I've not seen a company that started up just to fight disinformation," Romero said.

The startup executives we interviewed assume that widespread regulation of online hate speech and the platforms is not imminent and that consumer and corporate demand for their products will continue. They note that Facebook and its affiliates, as well as YouTube and Twitter, have no incentive to change a business model that is based on generating outrage and engagement. In this scenario, without regulation the platforms will continue to allow, or even encourage, the circulation of falsehoods online.

Disinformation is false information spread deliberately to deceive. The English word disinformation is a loan translation of the Russian dezinformatsiya, derived from the title of a KGB black propaganda department.4

Claire Wardle, executive director of First Draft, has further broken down the definition into types of dis/misinformation, citing a number of scholars who have also tried to do so. They have provided a taxonomy of the different kinds of mis- and disinformation (including, for example, satire and false context as well as imposter content and fabricated content) and also made taxonomies of who the different actors and targets are.

The platforms could do more to fix the problem but have no incentive to do so.

These firms exist because of a lack of action by social media companies that have no financial incentive to fix the problem themselves, and because government regulation addressing the problem has not yet been passed. Or the regulation that does exist, i.e. Section 230 of the Communications Decency Act, protects the social media platforms from being liable for what is put on their platforms. Many of the companies we spoke to said that Facebook, YouTube, and Twitter have been extremely lax and irresponsible in allowing hate speech and disinformation to contaminate their networks. Many believe that, technically, it's not a hard problem to fix, but they note that there is no incentive for Facebook or Twitter or YouTube to do so. "If the platforms try to tackle the problem internally it will be a huge revenue loss for them so

they don't have an incentive to do it," predicted Sagar Kaul, one of the founders of Metafact, a fact-checking platform based in India.

"The online disinformation is a lot like the spam problem. And it could probably be solved the way we solved the spam problem. We solved spam with a combination of legal and technical measures. First, Congress passed a law (the CAN-SPAM Act)2 that imposed fines on the senders of spam. Faced with liability, the email industry then set up a "black list" of known spammers that they all agreed to block. Similarly, if there was some legal or financial cost to the platforms, they would likely set up a "black list" of disinformation outlets to block. But they currently have no incentive to do so," says leading technology journalist Julia Angwin.

Mark Zuckerberg has called for global regulation, but observers note that his comments are belied by the amount of time and money that Facebook and other tech giants spend lobbying to avoid regulation.

"The platforms are extremely good at making soothing noises. They are silver tongued and very good at what they do. Their response is 'We are hiring 20,000 people. We are all over this problem, we have a community that will flag false news.' Facebook's point of view is 'we got this. We acknowledge the problem. We are in the 12-step program.' But they haven't admitted the first step: there is a power beyond ourselves. Facebook has never acknowledged this problem is bigger than them," said John Battelle.

Some firms try to work with advertisers to pressure the platforms.

In the absence of regulation or financial incentives, there is no reason for the platforms to remove mis/disinformation online. Therefore many of the startups hope that reputational risk and naming and shaming will prove effective. They are working with advertisers to see if the advertisers can push the platforms to take more action. Assuming that big and respectable brands will not want to run their advertisements next to unsavory content, a number of the startup founders we spoke to expect that reputable corporate advertisers will push for change and force platforms to crack down on hate speech online as well as dis/misinformation. These platforms and some other groups, like the United For News coalition, hope to persuade advertisers to push the platforms to do a better job of screening ads.

The idea of pressuring advertisers to pressure the platforms is an attractive one. Since most of the platform revenue comes from advertising, it seems that they would be most susceptible to pressure from advertisers. Journalists also hope to promote direct buying of advertising rather than programmatic, so that quality media can benefit from advertising revenue rather than see it go to random websites.

It is worth noting that advertisers would themselves be damaged by government bans on micro targeting or the spread of regulation aimed at protecting privacy, and so pushing the platforms to take action would protect both sectors (platforms and brand advertisers) from regulations they don't want. Groups like United For News also hope to protect the quality of journalism.

Regarding United For News, publisher John Battelle says: "I am a huge fan of the philosophy. Direct media buying supports what I think is the most important part of publishing—the direct relationship between publisher, audience and marketer. Programmatic advertising has stripped away the context of an experience with the audience. Advertisements go into unknown places, out of context. Direct advertising will re-establish the connection with the audience. The problem is that programmatic advertising is cheap for advertisers and cheap for publishers to implement, so it's irresistible—especially for the advertisers, who control the ecosystem."

What is their business model?

While some of the firms in this field have been around for years, others are nascent and small. Many got help from friends and family as well as small startup grants and are now busy trying to develop and launch their tools or scale. Metafact was started with funding from Hanyang University in partnership with Seoul Metropolitan Government, while others, like the Global Disinformation Index, relied on foundations. A few, like Brandwatch, have been around for years and have a core activity of monitoring the web for corporate clients. Yet some, like Vett News, consider fighting disinformation to be a part of their core activity. NEVA is doing facial recognition and has an ongoing business

How the Funding Landscape Works
An interview with publisher John Battelle, a serial entrepreneur and investor in media and technology

In an interview, John Battelle explained the way these types of startups get funding. Some angels are willing to put in amounts up to one-two million dollars, but it's unlikely that large investors will put in lots of money because they would want to see major returns.

"It's not hard to get some money at this stage, but raising a seed round isn't proof of much. A sophisticated, later stage investor would say 'I am never going to put a lot of money in this field because it's controlled by the big companies.' If Twitter or Facebook were looking for new ideas—for example, how to identify fake news using images—then they could acquihire (buy an early stage company for the people), paying half a million or so per engineer. Or, they'll just copy the technology. There's no reason for a VC to put $10 million into a company with that profile. Institutional VCs know that should the large platforms identify these seed stage companies as doing anything useful, they'll either copy it, or acquire it for not much money. They certainly won't depend on a third-party technology. VCs who want to put a lot of money to work won't win in this scenario," Battelle said.

"These investors don't want to put in money at a two million dollar valuation, only to get out at a three million valuation. This means the innovation space has become a desert, blocked by the large platforms which have a monopsony over demand for acquisitions…. A functioning market should have a flourishing ecosystem of innovation. What we have here is a market failure because of an oligarchy."

Without acquisition by the big platforms, Battelle thinks the startups dealing with disinformation won't scale. "Entrepreneurs often say 'this time is different,' but then again, that's what founders of new companies always say. They have a new technology, and it may or may not appeal to the platforms. And it may appeal to the other parts of the market that matter, such as consumers who could put an extension into their browsers. These are people who are motivated not to put fake news in their lives. But that's a long shot. You can't build a company based on a web browser extension."

that will fly on its own. NewsGuard is a product for journalists to do their job better and is seeking funding from the platforms.

Some, like Truepic, hope for commercial applications for their technology. Truepic verifies photographs taken with its technology and assigns them a unique number stored in the cloud. This will not only help governments and human rights groups and media organizations that need to verify, say, pictures of atrocities or human rights violations, but could also be extremely useful for insurance companies that are vetting claims and rooting out fraud.

There may be a way for some of these firms to generate revenue from the technology they have developed, but only a few are likely to scale. "There are probably a handful of scalable ones that use machine learning and natural language processing. Extracting information from video and images is difficult and something you can profit from. Companies that figure out how to do that will make some money and the rest of the firms will have a hard time scaling," said Justin Hendrix from NY Media Lab.

Disagreement as to how much screening of dis/misinformation can be done without people

There is some disagreement among the founders as to how much of the screening of mis/disinformation can be automated and how much can't. Can Artificial Intelligence (AI) and natural language processing identify all or most of the bad stuff? The people we interviewed were split on this question. Some argue that people are an essential part of the process because so many of the sites are designed to mislead and look like real news sites. As a result, it's almost impossible for a computer to recognize all the different characteristics of these kinds of sites.
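Truepic's pipeline is proprietary, but the general pattern it describes, registering a fingerprint of a photo at capture time and later checking any submitted copy against that record, can be sketched with a plain cryptographic hash. This is an illustrative sketch under stated assumptions, not Truepic's implementation; the registry and IDs are invented placeholders.

```python
# Illustrative sketch of hash-based photo verification in the spirit of
# Truepic's described approach: a fingerprint of the image is registered
# at capture time, and a submitted file is later verified against it.
# NOT Truepic's actual system: a real service would also sign metadata
# (time, location) and keep the registry on a trusted server.
import hashlib

REGISTRY: dict[str, str] = {}  # unique id -> SHA-256 hex digest

def register(image_bytes: bytes, unique_id: str) -> None:
    """Record the image's fingerprint at capture time."""
    REGISTRY[unique_id] = hashlib.sha256(image_bytes).hexdigest()

def verify(image_bytes: bytes, unique_id: str) -> bool:
    """True only if the submitted bytes match the registered fingerprint."""
    return REGISTRY.get(unique_id) == hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...raw image bytes..."
register(original, "photo-0001")
print(verify(original, "photo-0001"))            # untouched file matches
print(verify(original + b"edit", "photo-0001"))  # any alteration breaks the match
```

Because the hash is taken over the raw bytes, even a one-pixel edit produces a different digest, which is what makes this style of verification useful for vetting insurance claims or atrocity photos.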

Can this problem be solved by technology?
An interview with NYU Professor Danny Rogers

Danny Rogers' background is in intelligence and the dark web. His day job is working on combatting fraud and identity theft using dark web intelligence. Developing the Global Disinformation Index is a labor of love which he hopes will be used by advertisers who want to avoid placing their advertisements next to disinformation, as well as by platforms and other tech companies to help de-platform disinformation efforts.

"I don't think automated natural language processing is scalable. Computers are not going to be able to distinguish content that is designed to fool humans."

Rogers distinguishes two kinds of disinformation and says these need to be analyzed differently. At the top of the food chain are "highly organized threat actors like state-run operations or commercial ones like Cambridge Analytica." At the bottom are decentralized purveyors such as trolls, 4chan, clickbait purveyors, the legendary Macedonian teenagers.

One is high quality and comes from fully fledged media operations such as Breitbart or Russia Today. "It's more pernicious because at least 60% of RT is high quality journalism and it looks different from the low quality sites put up quickly to get eyeballs," Rogers said.

"High quality sites will be largely impossible to differentiate. A computer will have a really hard time computing the difference between Breitbart and CNN. Breitbart is very nuanced and you have to look at it from a journalistic perspective, not computational. The junky misinformation has lots of hallmarks such as spelling errors, recycled material; it's often presented on a WordPress template. All of this has signatures that you can identify."

One unresolved question is how to define the problem so as to get bipartisan consensus. "Using the word 'disinformation' makes Conservatives wary of a liberal plot to silence the media. But it's not two equal sides: it's irrationality versus enlightenment thinking," Rogers said.

"Facebook's job is to get people to click on links. They don't want to combat this. They run affiliate marketing conferences teaching people how to do this. They got away with it for years until the 2016 elections. Facebook is siphoning off all the ad revenues of these clicks. Twitter has no incentive from a business model perspective to kick off the bots because their stock price is directly tied to their user count. The bots make them look bigger and more popular than they are."
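Rogers' claim that low-quality misinformation carries machine-detectable hallmarks, while high-quality propaganda does not, can be illustrated with a toy scorer. The signal names and weights below are invented for illustration; a real classifier would be trained on labeled data rather than hand-tuned.

```python
# Toy scorer for the "hallmarks" Rogers lists for junky misinformation
# sites: spelling errors, recycled material, and stock CMS templates.
# Signals and weights are invented; this is a sketch, not a product.
import re

COMMON_MISSPELLINGS = {"recieve", "seperate", "definately", "goverment"}

def junk_signals(page_text: str, template: str, reuse_ratio: float) -> float:
    """Score in [0, 1]; higher means more hallmarks of a junk site.

    reuse_ratio: fraction of the page's text seen verbatim elsewhere,
    a stand-in for "recycled material".
    """
    words = re.findall(r"[a-z']+", page_text.lower())
    misspell_rate = sum(w in COMMON_MISSPELLINGS for w in words) / max(len(words), 1)
    score = 0.4 * min(misspell_rate * 50, 1.0)      # spelling errors
    score += 0.4 * reuse_ratio                      # recycled material
    score += 0.2 * (template == "stock-wordpress")  # off-the-shelf template
    return round(score, 2)

print(junk_signals("the goverment will definately act", "stock-wordpress", 0.9))  # high
print(junk_signals("carefully edited original reporting", "custom", 0.05))        # low
```

By the same token, an outlet that avoids these surface hallmarks would score low here no matter what it publishes, which is exactly Rogers' point about the limits of computational screening.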

Others, more optimistically, believe that detecting and even suppressing false/disinformation can be automated nearly completely using AI and natural language processing, but all admit that there is some uncertainty as to how this can be done, as well as limits on how effective it will be.

They also note that tech solutions will take seven-ten years to implement and so aren't any faster than waiting for comprehensive government regulations. "Private sector solutions are all seven to ten years down the road. I don't see one coming up in one to two years," said Joe Ellis from Vidrovr.

Will it even work?

Another problem is that much of the mis/disinformation is put on small fake sites that are new and change constantly, so it may be impossible. Organizations like Mediabiasfactcheck.com can't keep up with the new sites constantly appearing, says Romero. Further, these small sites inject their false information into Facebook and Twitter, where it gets circulated further.

The human factor also comes into play, as it's hard to see how different tech solutions can be applied all over the world in places with different values. "How do you handle a global platform operating in different places

with different values? Even if the technology existed, the application in different contexts would be a nightmare," says Romero.

According to Marie Frenay, a member of the office of the European Commission's Vice-President and Commissioner for Digital Single Market Andrus Ansip, "There are very promising research projects and startups which explore the potential of AI to detect disinformation, identify patterns. We need to continue investing in this area. At the same time, it is also clear that human expertise is needed. I see technologies as tools that can assist disinformation experts in their work. It is about complementarity. As disinformation is conducted more and more subtly and covertly, making it harder to detect and attribute, we need the best of human brains and machines to address it."

Paradox of rating systems

Others worry about whether the rating systems can be gamed, politicized or corrupted in the same way Moody's was before the 1997 financial crisis. At that time, Moody's business model came under fire because, as well as providing ratings, they also sold services to countries wanting to improve their ratings. They were also notorious for giving high ratings to countries that collapsed shortly after. The problems with the sovereign rating agency model can be applied to the rating agencies in the disinformation/news sector.

Julia Angwin said, "A better model would be peer accreditation, where journalists band together to enforce a set of standards on their industry and only include outlets that meet the criteria." Danny Rogers is working on developing international standards. This idea of an industry standard is also being attempted by the Paris-based NGO Reporters Without Borders, which is working with the European Standards Authority and an international coalition of journalists in order to come up with a list of credible media outlets that follow internationally agreed-upon standards. The list has already been criticized for including RT, the Russian television network.

Indeed, making such lists transparent opens up the rating entities to criticism. It is ironic that, given the desire for more transparency online, the entities that are rating news sites are not trusted by people if they know who is behind them.

An article by Will Oremus in Slate in January 2019, "Just Trust Us," criticized NewsGuard for daring to make decisions as to what is trustworthy and what is not, pointing out the surprisingly positive rating for FoxNews.com and extremely negative rating for Al Jazeera, and speculating that in the future, "the ratings authorities [could] become too powerful."

"Making decisions about what misinformation to suppress or promote almost has to be done anonymously because if people know who is behind the effort they may not trust it. It's strange but people almost seem to trust Facebook more than they would trust another group. The second you know who is behind the effort people will start arguing about whether the group is qualified to pass judgement," Reg Chua, chief operating officer, editorial, of Thomson Reuters said in an interview. "It's not that I am in favor of secret cabals deciding what we read; but more that all having a group vet information does is move the debate about who to trust upstream—from the news source to the vetters."

NewsGuard co-founder Gordon Crovitz points out that research on this question shows the opposite: consumers are willing to trust journalists to rate other journalists so long as they operate in a transparent and fully disclosed manner, and are not willing to trust Silicon Valley's non-disclosed algorithms.5

Fixing symptoms not causes

One criticism of the companies trying to use AI to look at mis/disinformation is that they ignore its underlying causes and sources. For Alejandro Romero, "the social networks are the last building block" in a process that begins with entities that find vulnerabilities in society and then target them, stoking fears about subjects such as immigration in order to affect the integrity of elections and political decision making. While the media is being targeted, it is governments that are being affected.

What do the startups think about regulation?

In our interviews, we found mixed views as to whether government regulation would be a good idea. Some entrepreneurs, such as Joe Ellis and Gordon Crovitz from NewsGuard, believe in free market approaches. Mark Little from Nevalabs said, "I am afraid of regulation that doesn't solve the problem but will make the perception of elite control of the media worse."

Ellis also seemed wary of regulation: "The disinformation question is really hard. I don't know how to solve it. The best way to try to solve it is to give as much power to the actual consumer as possible so that search involves user intentions."

Others are open to the idea of regulation but have not looked closely at the details. Rolling back Section 230 would fix the problem straight away, said Eric Feinberg and Julia Angwin. It would restore intermediary liability, which was implemented in Germany at the start of 2019 and opposed by internet rights groups.

Other ideas include:
• removal of programmatic ads networks
• legislation for more human fact checkers
• Twitter promotes ads that pay them; instead it could promote fact-checked content
• strong privacy regulations would help prevent exploitation, identity theft, and micro targeting.

"A ban on microtargeting would work. We strongly recommend the European Commission to regulate Facebook, regulate algorithms so that they don't micro target based on creating small information bubbles that fragment, atomize and divide our societies," suggests Frantisek Vrabel from Semantic Visions in the Czech Republic.

As all noted, the threat of regulation will be an intrinsic part of getting the platforms to do more self-policing. "There is a carrot and stick approach. Big tech's incentive is that advertisers are pushing for that, but the stick is even more powerful and governments are pushing for that," said Oren Levi from Adverifai.

Conclusion

In this world of fragmentation, the small startups using AI and natural language processing are another niche to watch. It's clear that the entrepreneurs we interviewed see a business opportunity in using people and natural language processing to identify and possibly remove mis/disinformation online. However, the limitations are all too apparent. The first is that AI is not subtle enough to identify all of the myriad forms of disinformation currently contaminating the information ecosystem. Second, even if it were possible to use the technology at scale, there is little evidence that Facebook, Google and Twitter would use it—one of many reasons why regulation of the platforms is essential. Third, these tech-based solutions don't address the larger economic, social and political reasons that dis/misinformation spreads.

"Platforms will not take on active defense of truth telling institutions [and] even if the platforms wanted to fix the problem, they can only have an important but limited impact in the disinformation landscape. They are a contributor to a massively organized disinformation system. But the digital ecosystem is broken and the possibilities of gaming the system are endless," says Romero.

Individual companies

FAST FACTS

Brandwatch—monitors the web and sells the information to companies who care about their brand. Merged with Crimson Hexagon in 2018 and will continue to be called Brandwatch.

Background: "We work with the language of attribution rather than defining what is true or false. We are not an encyclopedia or research institute. We can't tell you what the GDP of Peru is, but we can give you tools to determine where a story came from and where it's likely to go. We give people a more structured way to understand sources and make intelligent decisions. That's how it should be. The world shouldn't abandon a posture of neutrality towards information. I am not comfortable with saying what is true and what is not," said Paul Siegel.

"Fear, uncertainty, doubt opens up pocket books better than consumer insights do."

Brandwatch: "A true story looks the same as a false story. Both are a collection of sentences." "There is a distinction between language and truth that is hard to make." –Paul Siegel

What is it: UK-based company that monitors the web and sells clients data on the online public perception of their brand.

How funded: Privately held, VC funded as well as private clients.

Staff size: Over 550

Launch date: 2006

Future plans: Sees a market for selling services to government, government contractors and enterprises that are targeted by disinformation.

FAST FACTS

Factmata: "I think legislation will drive the platforms to do things and is needed urgently. It's a that it has come to regulation and that they haven't taken it on properly." –Dhruv Ghulati

What is it: Software company that provides AI tools that detect specific types of disinformation. For businesses, they monitor their brand online to eliminate undesirable information about them.

How funded: Has raised $1.8m from seed funding.

Staff size: ~10

Launch date: 2017

Future plans: They are developing browser extensions and an app for individuals.

FAST FACTS

Global Disinformation Index: "Right now we have the most brand unsafe environment in the history of advertising. It's the Wild West. Platforms have no incentive to actually secure themselves. We're trying to catalyze grass roots support and get the advertising buyers to have a say." –Danny Rogers

What is it: A UK non-profit trying to make an "AI-powered classifier which can identify junk domains automatically" and would then work with programmatic ad networks so that they have a "dynamic blacklist of sites thereby choking off funding for disinformation networks."

"We want the Global Disinformation Index to be the ones to take on the risk. We have no skin in the game and can provide transparent, neutral ratings that platforms and the brand safety community can use."

"The goal is to have a couple of products. One, a self-updating blocklist of junky open websites that are worth blocking in the ad exchange. This can be used by the ad tech community to allow them to block ad buys on junky sites. No one company wants to take a stand or say 'this is good or bad.' So we want to be neutral and transparent and be the risk-absorbing entity," Rogers said.

Funding: $50,000 grant from Knight Foundation. Now has $1 million in seed money. Other funders include USAID and Luminate.

Staff size: About five.

Launch date: 2017

Future plans: Get a browser extension into schools and libraries and the ad tech market, and be bought by Facebook or Twitter.

FAST FACTS

Vett News: "People would like a tool that when they are reading an article would allow them to understand if it's fake or not, good or not, opinion or not, validation or confirmation" –Paul Glader

What is it: Like NewsGuard, they currently provide a Chrome extension that rates news sites based on their reliability; green for trustworthy and red for unreliable.

How funded: Bootstrap, not yet ready to raise funding.

Staff size: Three co-founders.

Launch date: 2018

Future plans: Is a member of the Reporters Without Borders "technical advisory committee" and working with the European Standards Organization to get consensus-based standards developed for media outlets who opt in. Eventually, this could lead to certification of outlets that meet certain standards of transparency and other criteria.

FAST FACTS

NewsGuard:

What is it: Ratings system that assigns red or green ratings called "nutrition labels" to thousands of news sites around the world. Can be used as a browser extension.

How funded: Private investors including the founders. Charges the digital platforms to grant their users access to the nutrition labels instead of charging the actual publications being rated.

Staff size: Stands at 28, currently expanding.

Launch date: 2018

FAST FACTS

Vidrovr: Instead of passively receiving information, Ellis hopes that Vidrovr can help audiences as well as video producers become more active about what they see. "You are at the behest of what the algorithm chooses to show you. Radical transparency will build trust so audiences can know who you are and what you have done. We think that will help combat misinformation and help companies monetize their video, allowing them to be transparent and provide intent into user search," said Joe Ellis, who took leave from his Columbia PhD program to run Vidrovr.

"When you start a company, you have a vision for building it, and that can get derailed by running out of money and going out of business, or you get gobbled up by a large company that thinks what you are doing is really interesting. Google is very good at building search solutions and using machine learning technology to make content available, and has a lot of talent. Can a startup do better than the big companies? Usually not."

What is it: Software that helps companies and news outlets index, annotate and search their videos.

How funded: Got tech start-up money aimed at student projects and then VC money; raised $1.25 million.

Clients: Noting that they can’t disclose most of their customers, Ellis says Vidrovr “work[s] with some of the largest broadcasters in the US.”

Staff size: 7

Launch date: 2016

Future plans: Switch video viewing from social sites to an OTT (personal), more mobile-based platform.

FAST FACTS

Metafact: “MetaFact is creating disinformation defense solutions for newsrooms, brands and organizations. By leveraging next-gen technology like advanced AI, and by analyzing patterns and bucketing data sets, Metafact helps newsrooms to understand if a certain discourse around a particular topic is genuine, or is a targeted campaign orchestrated to change public opinion or inflict financial damage.

Profiling bots that spread false claims is of paramount importance. Profiling human-run bot-like accounts is tougher, yet achievable with a claims-first approach. By being able to detect a claim as soon as it’s uploaded online, our tool is able to track the interaction of bot accounts and influencers long before any other tool is able to detect it as a threat. By using our claims-first approach we can proactively detect, monitor, and defend brands from disinformation attacks before they gain momentum and inflict financial losses.” –Sagar Kaul

What is it: A company that builds AI-based disinformation detection and defense solutions.

How funded: Bootstrap, friends and family. Obtained a grant for $20,000 from Hanyang University in South Korea. Kaul said, “We also just completed an accelerator program in Ireland through the National Digital Research Centre (NDRC), sponsored by the Irish government and Enterprise Ireland. Metafact was the first startup selected from outside of Ireland and got 30,000 euros. We are working on the IBM Watson platform since we were selected for the IBM Global Entrepreneur program, which provides IBM Cloud credits.”

Staff size: 7

Launch date: Prototype was launched in April 2018. MVP for the tool will be launched by the end of July 2019.

Future plans: Launching the tool for media houses, businesses and corporations. Developing an AI-enabled media literacy app for kids: “Media literacy has to play an important part. We’re developing a mobile app that limits what kids can see on their phones and at the same time helps them understand echo chambers and how disinformation is spread. AI will be an integral part of the app. It will understand the needs of the kids, and based on that the app can give recommendations for reading material.”

FAST FACTS

GIPEC: Eric Feinberg was in advertising technology. Has a patent on what he calls “anticipatory intelligence.” He has been spitting mad since he began finding unsavory content online, including ISIS posts calling for attacks on US troops.

How it works: His software looks for words like “caliphate,” beginning with hashtags, and then trails them through the web. “My systems dig through all accounts using it.” Feinberg also maintains a faux account, so the algorithms push pro-jihadi content to him just as they do to ISIS sympathizers.

Feinberg notes that he is “not going after the top, it’s the peer-to-peer, the sympathizers…You’ve got ISIS, radical jihadists getting radicalized in Indonesia, Bangladesh, Philippines….Facebook’s algorithm has connected the world for radical jihad.”

What is it: Monitors where clients’ ads are going online and generally helps its clients protect their brands online. Combines their research and reports with entities such as the Digital Citizens Alliance. (Digital Citizens Alliance is a nonprofit 501(c)(6) organization. Its IRS form doesn’t indicate its donors, simply stating that DCA’s revenue comes from “program services.”)

How funded: Bootstrap, looking for capital.

Staff size: Unknown

Launch date: 2015

Future plans: Hopes to be licensed, funded and to work with social media companies to reduce extremist content on their platforms.
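The first step of the hashtag-trailing approach GIPEC describes, scanning posts for hashtags that contain watchlist terms such as “caliphate” and collecting the accounts that use them, can be sketched as below. This is a minimal illustration of the general technique, not Feinberg's patented system; the watchlist, post format and account names are invented for the example.

```python
# Illustrative hashtag watchlist scan: flag accounts whose posts carry
# hashtags containing a monitored term, for follow-up tracing.
import re

WATCHLIST = {"caliphate"}  # placeholder terms, extend as needed

def flag_accounts(posts):
    """posts: iterable of (account, text) pairs.
    Returns the set of accounts using a watchlisted hashtag."""
    flagged = set()
    for account, text in posts:
        for tag in re.findall(r"#(\w+)", text):
            if any(term in tag.lower() for term in WATCHLIST):
                flagged.add(account)
    return flagged

sample = [("acct_a", "join the #caliphate now"),
          ("acct_b", "lovely weather today #sunny")]
print(flag_accounts(sample))  # {'acct_a'}
```

A real system would then crawl outward from the flagged accounts (the “trails it through the web” step) rather than stop at a single pass.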

FAST FACTS

AdVerif.ai: “Terrorism and violence are traditional[ly what companies are trying to eliminate]. Our focus is more fake news [that is] more challenging for technology to detect.” –Or Levi

What is it: Developing tools that use natural language processing to see if something is fake or suspect. Creating a blacklist for publishers and advertisers who want to protect their brand from being associated with fake content, and an API to help screen fake content.

How funded: Bootstrap for now; hopes to be bought some day.

Staff size: One founder

Launch date: Company began in 2017; tool to launch soon.

FAST FACTS

Alto Data Analytics: “Alto was not created to research disinformation. We just found it.” –Alejandro Romero

What is it: The company provides software that monitors, retrieves and presents data that allow businesses to understand customer insights, and offers help from a team of experts.

How funded:

Staff size: Over 100 people.

Launch date: 2012

FAST FACTS

Semantic Visions: “We don’t focus on analysis of online social networks but we focus on online news. In our experience the disinformation and propaganda start on news sites and blogs.” –Frantisek Vrabel

What is it: A large, speedy database and a web mining system that are used for risk assessment and monitoring. Has roots in the defense industry and uses open source intelligence.

How funded: Work for corporate clients (a real-time risk detection solution integrated into the SAP Ariba business commerce platform) pays for the work on disinformation. No government funding or funding from the platforms. Recently won a $250,000 grant from the US Department of State to help fund the development of cutting-edge new technology to combat disinformation online.6 The grant is going to come from the US Department of State’s Global Engagement Center.

Staff size: 25

Launch date: 2005

FAST FACTS

Truepic: “We want to make sure anyone in the world with a smart phone has the ability to capture an image and prove its contents are real.” –Mounir Ibrahim, a SIPA alum who worked at USUN

What is it: Image verification technology. Truepic has several products. Users can download the free app, and whenever they take a picture the system will log the time, date, location and pixilation and assign it an encrypted code that will be stored in the cloud. Truepic also has developed a remote inspections platform—known as Truepic Vision—for enterprise clients in insurance, banking and lending.

How funded: Not profitable but generating revenue; raised $8m in 2018.

Staff size: 30

Launch date: 2014

Future plans: Have commercial applications
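The capture-time verification idea behind Truepic's app can be sketched with a cryptographic digest: bind the image bytes to the capture metadata at the moment the photo is taken, so any later alteration can be detected by re-hashing. This is a simplified sketch using a plain SHA-256 hash, an assumption standing in for Truepic's actual (proprietary) “encrypted code”; the metadata fields and byte strings are placeholders.

```python
# Minimal image-provenance sketch: fingerprint image bytes together with
# capture metadata, then detect tampering by recomputing the fingerprint.
import hashlib
import json

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """Return a SHA-256 digest binding the image to its capture metadata."""
    record = json.dumps(metadata, sort_keys=True).encode() + image_bytes
    return hashlib.sha256(record).hexdigest()

meta = {"time": "2019-02-01T12:00:00Z", "lat": 40.807, "lon": -73.962}
photo = b"\xff\xd8\xff..."  # stand-in for real JPEG bytes

original = fingerprint(photo, meta)          # stored server-side at capture
tampered = fingerprint(photo + b"edit", meta)  # re-hash after an edit
print(original != tampered)  # True: any alteration changes the digest
```

In a production system the digest would itself be signed and stored out of the user's reach (Truepic describes cloud storage), since a fingerprint the editor can recompute proves nothing.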

Acknowledgements

Supported by a grant from the Nasdaq Educational Foundation on behalf of the School of International & Public Affairs (SIPA) Entrepreneurship & Policy Initiative

This paper was made possible with support from the Nasdaq Educational Foundation. Thanks to John Batelle, Justin Hendrix and Raju Narisetti for introductions to the firms we profiled. I am also grateful for the research provided by Chloe Oldham and for edits suggested by Hollie Russon Gilman. Above all, thanks to our interviewees for their time, and to Dean Merit Janow and the Committee that allocated the research funding.

About the SIPA Entrepreneurship & Policy Initiative

Columbia University’s School of International and Public Affairs (SIPA) Entrepreneurship & Policy Initiative engages scholars, entrepreneurs, and leaders from the public and private sectors to advance understanding of how to promote innovation, entrepreneurship, and social entrepreneurship. In 2016, the Nasdaq Educational Foundation awarded SIPA a multi-year grant to support initiatives in entrepreneurship and public policy. Since 2016, the Entrepreneurship & Policy Initiative has been a thought leader on topics including the Internet of Things (IoT), global education technology, cryptocurrencies and the new technologies of money, and the urban innovation environment.

End Notes

1. Merrill, Jeremy & Ariana Tobin. “Facebook Moves to Block Ad Transparency Tools—Including Ours,” ProPublica, January 28, 2019.

2. Controlling the Assault of Non-Solicited Pornography and Marketing, signed by President Bush in 2003.

3. Brandwatch on Crunchbase. www.crunchbase.com/organization/brandwatch

4. Wikipedia; and Garth Jowett & Victoria O’Donnell (2005), “What Is Propaganda, and How Does It Differ From Persuasion?”, Propaganda and Persuasion, Sage Publications, pp. 21–23, ISBN 978-1-4129-0898-6.

5. NewsGuard’s Online Source Rating Tool, www.newsguardtech.com/wp-content/uploads/2019/01/Gallup-NewsGuards-Online-Source-Rating-Tool-User-Experience-1.pdf.

6. www.gov.uk/government/news/semantic-visions-wins-250000-tech-challenge-to-combat-disinformation
