
You may have heard the outlandish claim: Bill Gates is using the Covid-19 vaccine to implant microchips in everyone so he can track where they go. It’s false, of course, debunked repeatedly by journalists and fact-checking organizations. Yet the falsehood persists online — in fact, it’s one of the most popular conspiracy theories making the rounds. In May, a Yahoo/YouGov poll found that 44 percent of Republicans (and 19 percent of Democrats) said they believed it.

How online misinformation spreads

This particular example is just a small part of what the World Health Organization now calls an infodemic — an unprecedented glut of information that may be misleading or false. Misinformation — false or inaccurate information of all kinds, from honest mistakes to conspiracy theories — and its more intentional subset, disinformation, are both thriving, fueled by a once-in-a-generation pandemic, extreme political polarization and a brave new world of social media.

“The scale of reach that you have with social media, and the speed at which it spreads, is really like nothing humanity has ever seen,” says Jevin West, a disinformation researcher at the University of Washington.

One reason for this reach is that so many people are participants on social media. “Social media is the first type of mass communication that allows regular users to produce and share information,” says Ekaterina Zhuravskaya, an economist at the Paris School of Economics who coauthored an article on the political effects of the Internet and social media in the 2020 Annual Review of Economics.

Trying to stamp out online misinformation is like chasing an elusive and ever-changing quarry, researchers are learning. False tales — often intertwined with elements of truth — spread like a contagion across the Internet. They also evolve over time, mutating into more infectious strains, fanning across social media networks via constantly changing pathways and hopping from one platform to the next.

Misinformation doesn’t simply diffuse like a drop of ink in water, says Neil Johnson, a physicist at George Washington University who studies misinformation. “It’s something different. It kind of has a life of its own.”

How hate spreads online

Each dot in this diagram represents an online community hosted on one of six widely used social media networks. (Vkontakte is a largely Russian network.) Black circles indicate communities that often contain hateful posts; the others are clusters that link to those. The green square near the center is a particular Gab community that emerged in early 2020 to discuss the pandemic but quickly began to include misinformation and hate. Communities connect to one another with clickable links, and while they often form discrete groups within a platform, they can also link to different platforms. Such links can break and reconnect, creating changing pathways through which misinformation can travel. Breaking these links and preventing new ones from forming could be an effective way for society to control the spread of hate and misinformation.

The Gates fiction is a case in point. On March 18, 2020, Gates mentioned in an online forum on Reddit that electronic records of individuals’ vaccine history could be a better way to keep track of who had received the Covid-19 vaccine than paper documents, which can be lost or damaged. The very next day, a website called Biohackerinfo.com posted an article claiming that Gates wanted to implant devices into people to record their vaccination history. Another day later, a YouTube video expanded that narrative, explicitly claiming that Gates wanted to track people’s movements. That video was viewed nearly 2 million times. In April, former Donald Trump advisor Roger Stone repeated the conspiracy on a radio program, which was then covered in the New York Post. Fox News host Laura Ingraham also referred to Gates’s intent to track people in an interview with then US Attorney General William Barr.

But though it’s tempting to think from examples like this that websites like Biohackerinfo.com are the ultimate sources of most online misinformation, research suggests that’s not so. Even when such websites churn out misleading or false articles, they are often pushing what people have already been posting online, says Renee DiResta, a disinformation researcher at the Stanford Internet Observatory. Indeed, almost immediately after Gates wrote about digital certificates, Reddit users started commenting about implantable microchips, which Gates had never mentioned.

In fact, research suggests that malicious websites, bots and trolls make up a relatively small portion of the misinformation ecosystem. Instead, most misinformation emerges from regular people, and the biggest purveyors and amplifiers of misinformation are a handful of human super-spreaders. For example, a study of Twitter during the 2016 election found that in a sample of more than 16,000 users, 6 percent of those who shared political news also shared misinformation. But the vast majority — 80 percent — of the misinformation came from just 0.1 percent of users. Misinformation is amplified even more when those super-spreaders, such as media personalities and politicians like Donald Trump (until his banning by Twitter and other sites), have access to millions of people on social and traditional media.

Thanks to such super-spreaders, misinformation spreads in a way that resembles an epidemic. In a recent study, researchers analyzed the rise in the number of people engaging with Covid-19-related topics on Twitter, Reddit, YouTube, Instagram and a right-leaning network called Gab. Fitting epidemiological models to the data, they calculated R-values which, in epidemiology, represent the average number of people a sick person would infect. In this case, the R-values describe the contagiousness of Covid-19-related topics in social media platforms — and though the R-value differed depending on the platform, it was always greater than one, indicating exponential growth and, possibly, an infodemic.
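As a rough illustration of that method, the sketch below fits a simple exponential-growth model to a made-up series of daily engagement counts and converts the growth rate into an R-value. The data and the one-day generation time are assumptions for illustration; the study’s actual epidemiological models are more elaborate.

    # Minimal sketch: estimate an R-value from daily engagement counts,
    # assuming simple exponential growth (illustrative, not the study's method).
    import numpy as np

    def estimate_r(daily_counts, generation_time=1.0):
        """Fit log(counts) = log(c0) + g*t, then convert growth rate g to R.

        Under exponential growth, R is roughly exp(g * generation_time).
        """
        t = np.arange(len(daily_counts))
        g, _ = np.polyfit(t, np.log(daily_counts), 1)  # slope g, intercept
        return np.exp(g * generation_time)

    # Hypothetical engagement counts for one topic over ten days:
    counts = [120, 150, 195, 260, 330, 470, 600, 810, 1050, 1400]
    print(f"Estimated R: {estimate_r(counts):.2f}")  # above 1 means exponential growth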


Bill Gates (left) meeting with Francis Collins (center), director of the National Institutes of Health, and Anthony Fauci (right), director of the National Institute of Allergy and Infectious Diseases, in 2017. Gates is one of many victims of online misinformation: In early 2020, he made a casual comment about better ways to keep electronic records of individuals’ vaccine history. Social-media users twisted this into a false conspiracy theory in which Gates and others would use the Covid-19 vaccine to implant microchips in everyone to remotely track their movements.

CREDIT: NATIONAL INSTITUTES OF HEALTH

Differences in how information spreads depend on features of the particular platform and its users, not on the reliability of the information itself, says Walter Quattrociocchi, a data scientist at the University of Rome. He and his colleagues analyzed posts and reactions — such as likes and comments — about content from both reliable and unreliable websites, the latter being those that show extreme bias and promote conspiracies, as determined by independent fact-checking organizations. The number of posts and reactions regarding both types of content grew at the same rate, they found.

Complicating matters more, misinformation almost always contains kernels of truth. For example, the implantable microchips in the Gates conspiracy can be traced to a Gates Foundation-funded paper published in 2019 by MIT researchers, who designed technology to record someone’s vaccination history in the skin like a tattoo. The tattoo ink would consist of tiny semiconductor particles called quantum dots, whose glow could be read with a modified smartphone. There are no microchips, and the quantum dots can’t be tracked or read remotely. Yet the notion of implanting something to track vaccination status has been discussed. “It isn’t outlandish,” Johnson says. “It’s just outlandish to say it will then be used by Gates in some sinister way.”

What happens, Johnson explains, is that people pick nuggets of fact and stitch them together into a false or misleading narrative that fits their own worldview. These narratives then become reinforced in online communities that foster trust and thus lend credibility to misinformation.

Johnson and his colleagues track online discussion topics in social media posts. Using machine learning, their software automatically infers topics — say, vaccine side effects — from patterns in how words are used together. It’s similar to eavesdropping on multiple conversations by picking out particular words that signal what people are talking about, Johnson says.
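As a rough analogue of that approach, off-the-shelf topic modeling infers themes from patterns of word co-occurrence. The sketch below runs scikit-learn’s latent Dirichlet allocation over a few invented posts; the team’s actual software isn’t public, so the data, library choice and parameters here are all illustrative.

    # Minimal topic-inference sketch using latent Dirichlet allocation.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    posts = [  # invented posts standing in for scraped social media text
        "vaccine side effects worry parents",
        "parents discuss vaccine safety and side effects",
        "election fraud claims spread on social media",
        "social media posts allege election fraud",
    ]

    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(posts)  # word-count matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)

    # Print the most heavily weighted words for each inferred topic.
    words = vec.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [words[j] for j in topic.argsort()[-4:][::-1]]
        print(f"Topic {i}: {', '.join(top)}")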

And as in conversations, topics can evolve over time. In the past year, for example, a discussion about the March lockdowns mutated to include the US presidential election and QAnon conspiracy theories, according to Johnson. The researchers are trying to characterize such topic shifts, and what makes certain topics more evolutionarily fit and infectious.

Some broad narratives are especially tenacious. For example, Johnson says, the Gates microchip conspiracy contains enough truth to lend it credibility but also is often dismissed as absurd by mainstream voices, which feeds into believers’ distrust of the establishment. Throw in well-intentioned parents who are skeptical of vaccines, and you have a particularly persistent narrative. Details may differ, with some versions involving 5G wireless networks or radiofrequency ID tags, but the overall story — that powerful individuals want to track people with vaccines — endures.

And in online networks, these narratives can spread especially far. Johnson focuses on online groups, like public Facebook pages, some of which can include a million users. The researchers have mapped how these groups — within and across Facebook and five other platforms, Instagram, Telegram, Gab, 4Chan and a predominantly Russian-language platform called VKontakte — connect to one another with weblinks, where a user in one online group links to a page on another platform. In this way, groups form clusters that also link to other clusters. The connections can break and relink elsewhere, creating complex and changing pathways through which information can flow and spread. For example, Johnson says, earlier forms of the Gates conspiracy were brewing on Gab only to jump over to Facebook and merge with more mainstream discussions about Covid-19 vaccinations.

These cross-platform links mean that the efforts of social media companies to take down election- or Covid-19-related misinformation are only partly effective. “Good for them for doing that,” Johnson says. “But it’s not going to get rid of the problem.” The stricter policies of some platforms — Facebook, for example — won’t stop misinformation from spilling over to a platform where regulations are more relaxed.


Researchers compared how often users of several social networks reacted to posts (e.g. liked, shared or commented) from unreliable vs. reliable sources. Users of the predominantly right-wing network Gab reacted to unreliable information 3.9 times as often as reliable information, on average. In contrast, YouTube users mostly engaged with reliable information, while users of Reddit and Twitter fell in between. The way that misinformation spreads online seems to depend largely on the characteristics of a network and its users, the researchers say. (X axis shows the ratio of reactions to information from unreliable vs. reliable sources.)

And unless the entire social media landscape is somehow regulated, he adds, misinformation will simply congregate in more hospitable platforms. After companies like Facebook and Twitter started cracking down on election misinformation — even shutting down Trump’s accounts — many of his supporters migrated to platforms like Parler, which are more loosely policed.

To Johnson’s mind, the best way to contain misinformation may be to target these inter-platform links, instead of chasing down every article, meme, account or even page that peddles misinformation — ultimately a futile game of Whac-A-Mole. To show this, he and his colleagues calculated a different R-value. As before, their revised R-value describes the contagiousness of a topic, but it also incorporates the effects of dynamic connections in the underlying social networks. Their analysis isn’t yet peer-reviewed, but if it holds up, the formula can provide a mathematical way of understanding how a topic might spread — and, if that topic is rife with misinformation, how society can contain it.

For example, this new R-value suggests that by taking down cross-platform web links, social media companies or regulators can slow the transmission of misinformation so that it no longer spreads exponentially. Once regulators identify an online group brimming with misinformation, they can then remove links to other platforms. This needs to be the priority, Johnson says, even more than removing the group pages themselves.
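The intuition can be shown with a toy simulation: two platforms modeled as random graphs, joined by a handful of cross-platform links. Severing those links confines the cascade to the platform where it started. This is only a sketch under assumed parameters, not the model from Johnson’s analysis.

    # Toy simulation: topic spread across two platforms bridged by a few links.
    import random
    import networkx as nx

    def simulate_spread(g, seed_node, p_transmit=0.3, steps=10):
        """Simple probabilistic contagion; returns how many nodes were reached."""
        infected = {seed_node}
        for _ in range(steps):
            for node in list(infected):
                for nb in g.neighbors(node):
                    if nb not in infected and random.random() < p_transmit:
                        infected.add(nb)
        return len(infected)

    random.seed(1)
    platform_a = nx.erdos_renyi_graph(100, 0.05)
    platform_b = nx.erdos_renyi_graph(100, 0.05)
    g = nx.disjoint_union(platform_a, platform_b)  # nodes 0-99 and 100-199
    bridges = [(5, 105), (20, 150), (60, 180)]     # cross-platform links
    g.add_edges_from(bridges)

    print("With bridges:   ", simulate_spread(g, seed_node=5))  # reaches both platforms
    g.remove_edges_from(bridges)
    print("Bridges removed:", simulate_spread(g, seed_node=5))  # confined to one platform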

Fact-checking may also help, as some studies suggest it can change minds and even discourage people from sharing misinformation. But the impact of a fact-check is limited, because corrections usually don’t spread as far or as fast as the original misinformation, West says. “Once you get something rolling, it’s real hard to catch up.” And people may not even read a fact-check if it doesn’t conform to their worldview, Quattrociocchi says.

Other approaches, such as improving education and media literacy, and reforming the business model of journalism to prioritize quality over clicks, are all important for controlling misinformation — and, ideally, for preventing conspiracy theories from taking hold in the first place. But misinformation will always exist, and no single action will solve the problem, DiResta says. “It’s more of a problem to be managed like a chronic disease,” she says. “It’s not something you’re going to cure.”

[Source: This article was published in knowablemagazine.org by Marcus Woo]

At the beginning of August 2019, a young white man entered a Walmart in El Paso, Texas, and opened fire with an AK-47-style rifle ordered online, killing 22 people and injuring 25 more. Less than an hour after the shooting was reported, internet researchers found an anti-immigrant essay uploaded to the anonymous online message board 8chan. Law enforcement officials later said that they were investigating the document, which had been posted minutes before the first calls to 911, even before the shooter opened fire. The essay posted on 8chan included a request: “Do your part and spread this brothers!”

That was the third time in 2019 that a gunman posted a document on 8chan about his intent to commit a mass shooting. All three of the pieces of writing from shooters posted online that year were loaded with white supremacist beliefs and instructions to share their message, or any video of the shooting, far and wide. The year prior, a man who entered a synagogue outside of Pittsburgh and opened fire was an active member of online forums popular amongst communities of hate, where he, too, signaled his intent to commit violence before he killed. In 2017, the deadly Unite the Right rally in Charlottesville, Virginia, was largely organized in online forums, too. And so it makes sense that in recent years newsrooms have been dedicating more reporters to covering how hate spreads over the internet.

Online hate is not an easy beat. First, there’s the psychological toll of spending hours in chat rooms and message boards where members talk admiringly about the desire to harm and even kill others based on their race, religion, gender and sexual orientation. Monitoring these spaces can leave a reporter feeling ill, alienated and fearful of becoming desensitized. Second, some who congregate in online communities of hate are experts at coordinating attacks and promoting violence against those they disagree with, including activists and journalists who write about them. Such harassment occurs both online and offline and can happen long after a report is published.

Consider a case from my own experience, where my reporting triggered a harassment campaign. In February 2019, I published an investigation of an e-commerce operation that Gavin McInnes, founder of the far-right men’s group the Proud Boys, whose members have been charged with multiple counts of violence, described as the group’s legal defense fund. During the course of my reporting, multiple payment processors used by the e-commerce site pulled their services. In the days after the article published, I received some harassment on Twitter, but it quickly petered out. That changed in June, when Tim Pool, a far-right-adjacent blogger who hosts a popular YouTube channel, made a 25-minute video about my story, accusing me of being a “left-wing media activist.” The video has since been viewed hundreds of thousands of times.

Within minutes of Pool’s video going live, the harassment began again. A dozen tweets and emails per minute lit up my phone — some included physical threats and anti-Semitic attacks directed at my family and me. A slew of fringe-right websites, including Infowars, created segments and blog posts about Pool’s video. I received requests to reset my passwords, likely from trolls attempting to hack into my accounts. Users of the anonymous message board 4chan and anonymous Twitter accounts began posting information directing people to find where I live.

What follows is general safety advice for newsrooms and journalists who report on hate groups and the platforms where they congregate online.

Securing yourself before and during reporting

Maintain a strong security posture in the course of your research and reporting in order to prevent potential harassers from finding your personal details. Much of the advice here on how to do that is drawn from security trainers at Equality Labs and Tall Poppy, two organizations that specialize in security in the face of online harassment and threats, as well as my own experience on the beat. It also includes resources that can help newsrooms support and protect reporters who are covering the online hate beat.

1. Download and begin using a secure password manager. A password manager is an app that stores all your passwords, which helps with keeping and creating complex and distinct passwords for each account. With your password manager, change or reset all your passwords to ensure you’re not using the same password across sites and that each password is tough to crack. You probably have more online accounts than you realize, so it might help to make a list. When updating passwords, opt for a two-factor authentication method when available. Use a two-factor authentication app, like Google Authenticator or Duo, rather than text messages, since unencrypted text messages can easily be compromised. 1Password is the password manager of choice for the experts at both Tall Poppy and Equality Labs. (For a sense of the passwords a manager generates, see the short sketch after this list.)

2.  Search for your name on online directory and data broker sites like White Pages and Spokeo, which collect addresses and contact information that can be sold to online marketers, and request your entries be removed. Online harassment campaigns often start with a search of these sites to find their target’s home address, phone number and email. Many data broker sites make partial entries visible, so it’s possible to see if your information is listed. If it is, find the site’s instructions for requesting removal of your entry and follow the directions. Do the same for people who you live with, especially if they share your last name. There are also services that can thoroughly scrub your identifying information from dozens of online directories across the web for you, like Privacy Duck, Deleteme and OneRep.

3. Make aliases. If you have to create an account to use a social media site you’re researching, consider using an alternate email address that you delete or stop using after the course of reporting. Newsroom practices vary, so if your username must reveal who you are per your employer’s policy, check with your editor about using your initials or not spelling out your publication in your username. It’s easy to make a free email address using Gmail or Hotmail. ProtonMail also offers free end-to-end encrypted email addresses.

4. Record your interactions with sources, as they may be recording their interactions with you. Assume every interaction you have is not only being recorded but might also be edited in an attempt to harass you or undercut your work. During one story I worked on about a hate-friendly social network, an employee of the website I interviewed recorded the interview, too. The founder of the site wasn’t happy with my report and proceeded to make a Periscope video of himself attempting to discredit the story by replaying my interview, courting thousands of views. If you’re at a rally, bring spare batteries and ensure you have enough space on your phone to record your interactions, or have a colleague with you so you can record each other’s interactions; this helps if you need evidence to counter attempts to discredit you. Importantly, before you record any interview, check if the state you’re reporting from has a two-party consent law, which requires that both parties on the call consent to being recorded and may require you to alert your interviewee that you’re recording the call.

5. Use a Virtual Private Network (VPN) to visit the sites you’re investigating. VPNs hide where web traffic comes from. If you’re researching a website and visiting it frequently, your IP address, location or other identifying information could tip off the site’s owners that you’re poking around. Do your research, as some VPN services are more trustworthy than others. Equality Labs recommends using Private Internet Access. Wirecutter also has a good selection of recommended VPNs.

6. Tighten your social media privacy. Make sure all your social media accounts are secured with as little identifying information public as possible. Do a scan of who is following you on your personal accounts and ensure that there isn’t identifying information about where you live posted in any public place or shared with people who may compromise your safety. Consider unfriending your family members on social media accounts and explain to them why they cannot indicate their relationship to you publicly online. Likewise, be aware of any public mailing lists you may subscribe to where you may have shared your phone number or address in an email and ask the administrator of the email list to remove those emails from the public archive.

7. Ask your newsroom or editor for support. “Newsrooms have a duty of care to their staff to provide the tools that they need to stay safe,” says Leigh Honeywell, the CEO of Tall Poppy. Those tools may include paying for services that remove your information from data broker sites and a high-quality password manager. If your personal information does begin to circulate online, your newsroom should be prepared to contact social media platforms to report abuse and request the information be taken down. Newsroom leadership could also consider implementing internal policies around how to have their reporters’ backs in situations of online harassment, which could mean, for example, sifting through threats sent on Twitter and having a front desk procedure that warns anyone who answers the phone not to reveal facts such as whether certain reporters work at the office.
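As promised in the first item above, here is a quick illustration of the kind of long, random, per-site password a manager generates and remembers for you. It uses only Python’s standard library; the account names are placeholders.

    # Generate a distinct, high-entropy password per account.
    import secrets
    import string

    def generate_password(length=24):
        # secrets draws from a cryptographically secure random source.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One password per account; the manager stores them so you never
    # have to reuse or memorize any of them.
    for site in ["bank.example", "mail.example", "social.example"]:
        print(site, generate_password())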

After publishing

If you do face harassment and threats online after your report is published, you may want to enlist the help of an organization that specializes in online harassment security. Troll storms usually run about one week, and the deluge on Twitter and over email usually lasts no more than a few days. Take space from the internet during this time and be sure your editors are prepared to help monitor your accounts should you become a target of harassment.  

1. Ask someone to monitor your social media for you. Depending on the severity and cadence of the harassment that follows publication, you may wish to assign a trusted partner, an editor or a friend, to monitor your social media for you. Often the harassment is targeted at journalists via social media accounts. It can be an extremely alienating experience, especially if consumed through a smartphone, because no one fully sees what’s happening except the person targeted. During these moments, it’s best to step away from social media and not watch it unfold. This is often hard to do, because it’s also important to stay aware of incoming threats or attempts to find your home and family. Whoever is monitoring your social media should report accounts that send harassment, threats, obscenities and bigotry.

2. Don’t click on links from unknown senders. If you receive a text message from an unknown number or an email to reset a password, do not click on any links or open any attachments. Likewise, consider only opening emails in plain-text mode to ensure photos and malicious files do not download automatically. Be extra careful about links in text messages, as it’s rare for a password reset to come through a text message and it could be an attempt to verify your phone number by a harasser or to install malware on your phone. If you get suspicious texts or emails, contact whoever you consult for security.

3. Google yourself (or ask someone you trust to Google your name for you). When the harassment begins, someone should be checking social media and anonymous websites, like 4chan, Gab.ai and 8kun, which is how 8chan rebranded in 2019, for mentions of your name, address, phone number and portions of your address. 4chan and Gab.ai have policies against posting personal information, like emails, physical addresses, phone numbers or bank account information — a practice called doxing — and should remove identifying content when requested. Twitter, Facebook, LinkedIn and more popular social networks do, too. Also, set a Google alert for your name to see if you’re being blogged about. If you or your newsroom can afford it, consider working with a security expert who knows how to monitor private Discord chat groups, private Facebook groups, 8kun, Telegram and other corners of the internet where harassment campaigns are hatched.

4. Know when to get law enforcement involved. If a current or former address of yours begins to emerge online or if you’re receiving threats of violence, call your local police non-emergency line and let them know that an online troll may misreport an incident in the hopes of sending a team of armed police to your home — a practice known as swatting. Local police might not be accustomed to dealing with online threats or have a swatting protocol, but it’s worth making a call and explaining the situation to ensure that unnecessary force is not deployed if a fraudulent report is made.

5. Save your receipts. Check your email, check your bank account, and don’t delete evidence of harassment. If you receive emails that your passwords for online accounts are being reset, do not click on or download anything. Save all emails related to the harassment, too, as you may wish to refer to them later to see if a pattern emerges. The evidence might also be important if you need to prove to a business or law enforcement that you were the subject of a targeted campaign. Continue to monitor your bank account to ensure that fraudulent charges aren’t made and that your financial information is secure. Unfortunately, hacked credit cards and passwords abound online. You may decide to call your bank after being harassed and ask for a new debit card to be issued.

6. Let other journalists know what you’re going through. Remember, while it’s important to stay physically safe, the emotional toll is real, too. There’s no reason to go through online harassment alone. Don’t hesitate to reach out to other journalists on your beat at different publications to let them know your situation. Stronger communities make for safer reporting.


[Source: This article was published in journalistsresource.org by April Glaser]

[This article was originally published in purdue.edu by Chris Adam]

New technology makes it easier to follow a criminal’s digital footprint

WEST LAFAYETTE, Ind. – Cybercriminals can run, but they cannot hide from their digital fingerprints.

Still, cybercrimes reached a six-year high in 2017, when more than 300,000 people in the United States fell victim to such crimes. Losses topped $1.2 billion.

Now, Purdue University cybersecurity experts have come up with an all-in-one toolkit to help detectives solve these crimes. Purdue has a reputation in this area – it is ranked among the top institutions for cybersecurity.

“The current network forensic investigative tools have limited capabilities – they cannot communicate with each other and their cost can be immense,” said Kathryn Seigfried-Spellar, an assistant professor of computer and information technology in the Purdue Polytechnic Institute, who helps lead the research team. “This toolkit has everything criminal investigators will need to complete their work without having to rely on different network forensic tools.”

The toolkit was presented in December 2018 during the IEEE International Conference on Big Data.

The Purdue team developed its Toolkit for Selective Analysis and Reconstruction of Files (FileTSAR) by collaborating with law enforcement agencies from around the country, including the High Tech Crime Unit of Tippecanoe County, Indiana. The HTCU is housed in Purdue’s Discovery Park.

FileTSAR is available free to law enforcement

The project was funded by the National Institute of Justice.

The Purdue toolkit brings together in one complete package the top open source investigative tools used by digital forensic law enforcement teams at the local, state, national and global levels.

“Our new toolkit allows investigators to retrieve network traffic, maintain its integrity throughout the investigation, and store the evidence for future use,” said Seunghee Lee, a graduate research assistant who has worked on the project from the beginning. “We have online videos available so law enforcement agents can learn the system remotely.”

FileTSAR captures data flows and provides a mechanism to selectively reconstruct multiple data types, including documents, images, email and VoIP sessions for large-scale computer networks. Seigfried-Spellar said the toolkit could be used to uncover any network traffic that may be relevant to a case, including employees who are sending out trade secrets or using their computers for workplace harassment.

“We aimed to create a tool that addressed the challenges faced by digital forensic examiners when investigating cases involving large-scale computer networks,” Seigfried-Spellar said.

The toolkit also uses hashing for each carved file to maintain the forensic integrity of the evidence, which helps it to hold up in court.
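The article doesn’t spell out FileTSAR’s implementation, but the hashing step it describes generally works along these lines: compute a cryptographic digest for each carved file at collection time, then re-hash later to show the evidence is bit-for-bit unchanged. A minimal sketch, with the directory name assumed for illustration:

    # Hash each carved file so later tampering or corruption is detectable.
    import hashlib
    from pathlib import Path

    def hash_file(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large network captures don't exhaust memory.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Record a digest alongside each carved file; re-hash to verify in court.
    for carved in Path("carved_files").glob("*"):
        print(carved.name, hash_file(carved))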

Their work aligns with Purdue’s Giant Leaps celebration of global advancements in artificial intelligence, part of Purdue’s 150th anniversary. Artificial intelligence is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.


[This article was originally published in ijnet.org by Alexandra Jegers]

If you didn’t make it to Seoul for this year’s Uncovering Asia conference — or just couldn’t be at two panels at the same time — never fear, tipsheets from the impressive speakers are here! But just in case you can’t decide where to start, here are five presentations that are definitely worth checking out.

How to Make a Great Investigative Podcast

The human voice is a powerful tool. When someone is telling you a good story, you just can’t stop listening. It is, however, sometimes difficult to construct a good storyline for radio — especially if that’s new territory for you. In this excellent tipsheet, radio veteran Sandra Bartlett and Citra Prastuti, chief editor of Indonesian radio network Kantor Berita Radio, explain how to create images in your listener’s brain. Be sure to check out this story on some of their favorite investigative podcasts.

Best Verification Tools

From Russian trolls to teenage boys in Macedonia, Craig Silverman has exposed a wide gamut of disinformation operations around the world. He shared his experiences and research tips on a panel on fake news. Although years of experience like Silverman’s are certainly helpful, you don’t have to be an expert to spot fake news — or even a tech geek. In his tipsheet, Silverman continuously compiles tools that will help you easily check the accuracy of your sources.

Mojo in a Nutshell

Never heard of SCRAP or DCL? Then you are no different from most of the participants at the mojo workshop of award-winning television reporter Ivo Burum. Mojo is short for mobile journalism, which is becoming increasingly important in competitive, fast-moving newsrooms. Burum breaks down how to shoot, edit and publish an extraordinary video story using just your smartphone. Be sure not to miss his YouTube videos on mastering KineMaster or iMovie Basics or any of his regular columns on GIJN.

How to Track Criminal Networks

Transnational organized crime today generates $2 trillion in annual revenue, about the size of the UK economy, according to the UN Office on Drugs and Crime. It’s no wonder that, with that kind of cash on hand, authorities throughout the world often seem powerless to police them. But almost everybody leaves a digital trail, according to international affairs and crime reporter Alia Allana, who spoke at the Investigating Criminal Networks panel.

Web Scraping for Non-coders

Ever had a PDF document that you could not search with Ctrl + F? Or looked for specific information on a website with a seemingly endless number of pages? When documents have hundreds of pages or websites scroll for miles, it can be frustrating — not to mention time-consuming. With Pinar Dag and Kuang Keng Kuek Ser’s guidance, you’ll be web scraping like a pro in no time.
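For readers who want to peek past the no-code tools, a scraping job of the kind that session targets boils down to a few lines. This sketch uses the requests and BeautifulSoup libraries; the URL pattern and CSS selector are placeholders, not a real site from the tipsheet.

    # Minimal scrape of a paginated listing (placeholder URL and selector).
    import requests
    from bs4 import BeautifulSoup

    results = []
    for page in range(1, 4):  # first three pages of the listing
        resp = requests.get(f"https://example.com/records?page={page}", timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for row in soup.select("div.record-title"):
            results.append(row.get_text(strip=True))

    print(f"Collected {len(results)} records")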

This post was originally published by the Global Investigative Journalism Network.

Alexandra Jegers is a journalist from Germany who has completed the KAS multimedia program. She has studied economics in Germany and Spain and now writes for Handelsblatt, Capital, and Wirtschaftswoche.

Main image CC-licensed by Unsplash via Evan Kirby.


Online Methods to Investigate the Who, Where, and When of a Person. Another great list by Internet search expert Henk Van Ess.

Searching the Deep Web, by Giannina Segnini. Beginning with advanced tips on sophisticated Google searches, this presentation at GIJC17 by the director of Columbia University Journalism School’s Data Journalism Program moves into using Google as a bridge to the Deep Web using a drug trafficking example. Discusses tracking the container, the ship, and customs. Plus, Facebook research and more.

Tools, Useful Links & Resources, by Raymond Joseph, a journalist and trainer with South Africa’s Southern Tip Media. Six packed pages of information on Twitter, social media, verification, domain and IP information, worldwide phonebooks, and more. In a related GIJC17 presentation, Joseph described “How to be Digital Detective.”

IntelTechniques is prepared by Michael Bazzell, a former US government computer crime investigator and now an author and trainer. See the conveniently organized resources in left column under “Tools.” (A Jan. 2, 2018, blog post discusses newly added material.)

Investigate with Document Cloud, by Doug Haddix, Executive Director, Investigative Reporters and Editors. A guide to using 1.6 million public documents shared by journalists, analyzing and highlighting your own documents, collaborating with others, managing document workflows and sharing your work online.

Malachy Browne’s Toolkit. More than 80 links to open source investigative tools by one of the best open-source sleuths in the business. When this New York Times senior story producer flashed this slide at the end of his packed GIJC17 session, nearly everyone requested access.

Social Media Sleuthing, by Michael Salzwedel. “Not Hacking, Not Illegal,” begins this presentation from GIJC17 by a founding partner and trainer at Social Weaver.

Finding Former Employees, by James Mintz. “10 Tips on Investigative Reporting’s Most Powerful Move: Contacting Formers,” according to veteran private investigator Mintz, founder and president of The Mintz Group.

Investigative Research Links from Margot Williams. The former research editor at The Intercept offers an array of suggestions, from “Effective Google Searching” to a list of “Research Guru” sites.

Bellingcat’s Digital Forensics Tools, a wide variety of resources for maps, geo-based searches, images, social media, transport, data visualization, experts and more.

List of Tools for Social Media Research, a tipsheet from piqd.de’s Frederik Fischer at GIJC15.

SPJ Journalist’s Toolbox from the Society of Professional Journalists in the US, curated by Mike Reilley. Includes an extensive list of, well, tools.

How to find an academic research paper, by David Trilling, a staff writer for Journalist’s Resource, based at Harvard’s Shorenstein Center on Media, Politics and Public Policy.

Using deep web search engines for academic and scholarly research, an article by Chris Stobing in VPN & Privacy, a publication of Comparitech.com, a UK company that aims to help consumers make more savvy decisions when they subscribe to tech services such as VPNs.

Step by step guide to safely accessing the darknet and deep web, an article by Paul Bischoff in VPN & Privacy, a publication of Comparitech.com, a UK company that aims to help consumers make more savvy decisions when they subscribe to tech services such as VPNs.

Research Beyond Google: 56 Authoritative, Invisible, and Comprehensive Resources, a resource from Open Education Database, a US firm that provides a comprehensive online education directory for both free and for-credit learning options.

The Engine Room, a US-based international NGO, created an Introduction to Web Resources, which includes a section on making copies of information to protect it from being lost or changed.

Awesome Public Datasets, a very large community-built compilation organized by topic.

Online Research Tools and Investigative Techniques by the BBC’s ace online sleuth Paul Myers has long been a starting point for online research by GIJN readers. His website, Research Clinic, is rich in research links and “study materials.”

[Source: This article was published in gijn.org]
