Web Directories

Corey Parker

Tuesday, 05 September 2017 02:39

Who Owns the Internet?

On the night of November 7, 1876, Rutherford B. Hayes’s wife, Lucy, took to her bed with a headache. The returns from the Presidential election were trickling in, and the Hayeses, who had been spending the evening in their parlor, in Columbus, Ohio, were dismayed. Hayes himself remained up until midnight; then he, too, retired, convinced that his Democratic opponent, Samuel J. Tilden, would become the next President.

Hayes had indeed lost the popular vote, by more than two hundred and fifty thousand ballots. And he might have lost the Electoral College as well had it not been for the machinations of journalists working in the shady corners of what’s been called “the Victorian Internet.”

Chief among the plotters was an Ohioan named William Henry Smith. Smith ran the western arm of the Associated Press, and in this way controlled the bulk of the copy that ran in many small-town newspapers. The Western A.P. operated in tight affiliation—some would say collusion—with Western Union, which exercised a near-monopoly over the nation’s telegraph lines. Early in the campaign, Smith decided that he would employ any means necessary to assure a victory for Hayes, who, at the time, was serving a third term as Ohio’s governor. In the run-up to the Republican National Convention, Smith orchestrated the release of damaging information about the Governor’s rivals. Then he had the Western A.P. blare Hayes’s campaign statements and mute Tilden’s. At one point, an unflattering piece about Hayes appeared in the Chicago Times, a Democratic paper. (The piece claimed that Hayes, who had been a general in the Union Army, had accepted money from a soldier to give to the man’s family, but had failed to pass it on when the soldier died.) The A.P. flooded the wires with articles discrediting the story.

Once the votes had been counted, attention shifted to South Carolina, Florida, and Louisiana—states where the results were disputed. Both parties dispatched emissaries to the three states to try to influence the Electoral College outcome. Telegrams sent by Tilden’s representatives were passed on to Smith, courtesy of Western Union. Smith, in turn, shared the contents of these dispatches with the Hayes forces. This proto-hack of the Democrats’ private communications gave the Republicans an obvious edge. Meanwhile, the A.P. sought and distributed legal opinions supporting Hayes. (Outraged Tilden supporters took to calling it the “Hayesociated Press.”) As Democrats watched what they considered to be the theft of the election, they fell into a funk.

“They are full of passion and want to do something desperate but hardly know how to,” one observer noted. Two days before Hayes was inaugurated, on March 5, 1877, the New York Sun appeared with a black border on the front page. “These are days of humiliation, shame and mourning for every patriotic American,” the paper’s editor wrote.

History, Mark Twain is supposed to have said, doesn’t repeat itself, but it does rhyme. Once again, the President of the United States is a Republican who lost the popular vote. Once again, he was abetted by shadowy agents who manipulated the news. And once again Democrats are in a finger-pointing funk.

Journalists, congressional committees, and a special counsel are probing the details of what happened last fall. But two new books contend that the large lines of the problem are already clear. As in the eighteen-seventies, we are in the midst of a technological revolution that has altered the flow of information. Now, as then, just a few companies have taken control, and this concentration of power—which Americans have acquiesced to without ever really intending to, simply by clicking away—is subverting our democracy.

Thirty years ago, almost no one used the Internet for anything. Today, just about everybody uses it for everything. Even as the Web has grown, however, it has narrowed. Google now controls nearly ninety per cent of search advertising, Facebook almost eighty per cent of mobile social traffic, and Amazon about seventy-five per cent of e-book sales. Such dominance, Jonathan Taplin argues, in “Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy” (Little, Brown), is essentially monopolistic. In his account, the new monopolies are even more powerful than the old ones, which tended to be limited to a single product or service. Carnegie, Taplin suggests, would have been envious of the reach of Mark Zuckerberg and Jeff Bezos.

Taplin, who until recently directed the Annenberg Innovation Lab, at the University of Southern California, started out as a tour manager. He worked with Judy Collins, Bob Dylan, and the Band, and also with George Harrison, on the Concert for Bangladesh. In “Move Fast and Break Things,” Taplin draws extensively on this experience to illustrate the damage, both deliberate and collateral, that Big Tech is wreaking.

Consider the case of Levon Helm. He was the drummer for the Band, and, though he never got rich off his music, well into middle age he was supported by royalties. In 1999, he was diagnosed with throat cancer. That same year, Napster came along, followed by YouTube, in 2005. Helm’s royalty income, which had run to about a hundred thousand dollars a year, according to Taplin, dropped “to almost nothing.” When Helm died, in 2012, millions of people were still listening to the Band’s music, but hardly any of them were paying for it. (In the years between the founding of Napster and Helm’s death, total consumer spending on recorded music in the United States dropped by roughly seventy per cent.) Friends had to stage a benefit for Helm’s widow so that she could hold on to their house.

Google entered and more or less immediately took over the music business when it acquired YouTube, in 2006, for $1.65 billion in stock. As Taplin notes, just about “every single tune in the world is available on YouTube as a simple audio file (most of them posted by users).” Many of these files are illegal, but to Google this is inconsequential. Under the Digital Millennium Copyright Act, signed into law by President Bill Clinton shortly after Google went live, Internet service providers aren’t liable for copyright infringement as long as they “expeditiously” take down or block access to the material once they’re notified of a problem. Musicians are constantly filing “takedown” notices—in just the first twelve weeks of last year, Google received such notices for more than two hundred million links—but, often, after one link is taken down, the song goes right back up at another one. In the fall of 2011, legislation aimed at curbing online copyright infringement, the Stop Online Piracy Act, was introduced. It had bipartisan support in Congress, and backing from such disparate groups as the National District Attorneys Association, the National League of Cities, the Association of Talent Agencies, and the International Brotherhood of Teamsters. In January, 2012, the bill seemed headed toward passage, when Google decided to flex its market-concentrated muscles. In place of its usual colorful logo, the company posted on its search page a black rectangle along with the message “Tell Congress: Please don’t censor the web!” The resulting traffic overwhelmed congressional Web sites, and support for the bill evaporated. (Senator Marco Rubio, of Florida, who had been one of the bill’s co-sponsors, denounced it on Facebook.)

Google itself doesn’t pirate music; it doesn’t have to. It’s selling the traffic—and, just as significant, the data about the traffic. Like the Koch brothers, Taplin observes, Google is “in the extraction industry.” Its business model is “to extract as much personal data from as many people in the world at the lowest possible price and to resell that data to as many companies as possible at the highest possible price.” And so Google profits from just about everything: cat videos, beheadings, alt-right rants, the Band performing “The Weight” at Woodstock, in 1969.

“I wasn’t always so skeptical,” Franklin Foer announces at the start of “World Without Mind: The Existential Threat of Big Tech” (Penguin Press). Franklin, the eldest of the three famous Foer brothers, is a journalist, and he began his career, in the mid-nineties, working for Slate, which had then just been founded by Microsoft. The experience, Foer writes, was “exhilarating.” Later, he became the editor of The New Republic. The magazine was on the brink of ruin when, in 2012, it was purchased by Chris Hughes, a co-founder of Facebook, whose personal fortune was estimated at half a billion dollars.

Foer saw Hughes as a “savior,” who could provide, in addition to cash, “an insider’s knowledge of social media” and “a millennial imprimatur.” The two men set out to revitalize the magazine, hiring high-priced talent and redesigning the Web site. Foer recounts that he became so consumed with monitoring traffic to the magazine’s site, using a tool called Chartbeat, that he checked it even while standing at the urinal.

The era of good feeling didn’t last. In the fall of 2014, Foer heard that Hughes had hired someone to replace him, and that this shadow editor was “lunching around New York offering jobs at The New Republic.” Before Hughes had a chance to fire him, Foer quit, and most of the magazine’s editorial staff left with him. “World Without Mind” is a reflection on Foer’s experiences and on the larger forces reshaping American arts and letters, or what’s nowadays often called “content.”

“I hope this book doesn’t come across as fueled by anger, but I don’t want to deny my anger either,” he writes. “The tech companies are destroying something precious. . . . They have eroded the integrity of institutions—media, publishing—that supply the intellectual material that provokes thought and guides democracy. Their most precious asset is our most precious asset, our attention, and they have abused it.”

Much of Foer’s anger, like Taplin’s, is directed at piracy. “Once an underground, amateur pastime,” he writes, “the bootlegging of intellectual property” has become “an accepted business practice.” He points to the Huffington Post, since shortened to HuffPost, which rose to prominence largely by aggregating—or, if you prefer, pilfering—content from publications like the Times and the Washington Post. Then there’s Google Books. Google set out to scan every book in creation and make the volumes available online, without bothering to consult the copyright holders. (The project has been hobbled by lawsuits.) Newspapers and magazines (including this one) have tried to disrupt the disrupters by placing articles behind paywalls, but, Foer contends, in the contest against Big Tech publishers can’t win; the lineup is too lopsided. “When newspapers and magazines require subscriptions to access their pieces, Google and Facebook tend to bury them,” he writes. “Articles protected by stringent paywalls almost never have the popularity that algorithms reward with prominence.”

Foer acknowledges that prominence and popularity have always mattered in publishing. In every generation, the primary business of journalism has been to stay in business. In the nineteen-eighties, Dick Stolley, the founding editor of People, developed what might be thought of as an algorithm for the pre-digital age. It was a formula for picking cover images, and it ran as follows: Young is better than old. Pretty is better than ugly. Rich is better than poor. Movies are better than music. Music is better than television. Television is better than sports. And anything is better than politics.

But Stolley’s Law is to Chartbeat what a Boy Scout’s compass is to G.P.S. It is now possible to determine not just which covers sell magazines but which articles are getting the most traction, who’s e-mailing and tweeting them, and how long individual readers are sticking with them before clicking away. This sort of detailed information, combined with the pressure to generate traffic, has resulted in what Foer sees as a golden age of banality. He cites the “memorable yet utterly forgettable example” of Cecil the lion. In 2015, Cecil was shot with an arrow outside Hwange National Park, in Zimbabwe, by a dentist from Minnesota. For whatever reason, the killing went viral and, according to Foer, “every news organization” (including, once again, this one) rushed to get in on the story, “so it could scrape some traffic from it.” He lists with evident scorn the titles of posts from Vox—“Eating Chicken Is Morally Worse Than Killing Cecil the Lion”—and The Atlantic’s Web site: “From Cecil the Lion to Climate Change: A Perfect Storm of Outrage.” (In July, Cecil’s son, Xanda, was shot, prompting another digital outpouring.)

Donald Trump, Foer argues, represents “the culmination” of this trend. In the lead-up to the campaign, Trump’s politics, such as they were, consisted of empty and outrageous claims. Although none deserved to be taken seriously, many had that coveted viral something. Trump’s utterances as a candidate were equally appalling, but on the Internet apparently nobody knows you’re a demagogue. “Trump began as Cecil the Lion, and then ended up president of the United States,” Foer writes.

Both Taplin and Foer begin their books with a discussion of the early days of personal computers, when the Web was still a Pynchonesque fantasy and lots of smart people believed that connecting the world’s PCs would lead to a more peaceful, just, and groovy society. Both cite Stewart Brand, who, after hanging out with Ken Kesey, dropping a lot of acid, and editing “The Whole Earth Catalog,” went on to create one of the first virtual networks, the Whole Earth ’Lectronic Link, otherwise known as the WELL.

In an influential piece that appeared in Rolling Stone in 1972, Brand prophesied that, when computers became widely available, everyone would become a “computer bum” and “more empowered as individuals and co-operators.” This, he further predicted, could enhance “the richness and rigor of spontaneous creation and human interaction.” No longer would it be the editors at the Times and the Washington Post and the producers at CBS News who decided what the public did (or didn’t) learn. No longer would the suits at the entertainment companies determine what the public did (or didn’t) hear.

“The Internet was supposed to be a boon for artists,” Taplin observes. “It was supposed to eliminate the ‘gatekeepers’—the big studios and record companies that decide which movies and music get widespread distribution.” Silicon Valley, Foer writes, was supposed to be a liberating force—“the disruptive agent that shatters the grip of the sclerotic, self-perpetuating mediocrity that constitutes the American elite.”

The Internet revolution has, indeed, sent heads rolling, as legions of bookstore owners, music critics, and cirrhotic editors can attest. But Brand’s dream, Taplin and Foer argue, has not been realized. Google, Amazon, Facebook, and Apple—Europeans refer to the group simply as GAFA—didn’t eliminate the gatekeepers; they took their place. Instead of becoming more egalitarian, the country has become less so: the gap between America’s rich and poor grows ever wider. Meanwhile, politically, the nation has lurched to the right. In Foer’s telling, it would be a lot easier to fix an election these days than it was in 1876, and a lot harder for anyone to know about it. All the Big Tech firms would have to do is tinker with some algorithms. They have become, Foer writes, “the most imposing gatekeepers in human history.”

This is a simple, satisfying narrative, and it allows Taplin and Foer to focus their ire on GAFA gazillionaires, like Zuckerberg and Larry Page. But, as an account of the “unpresidented” world in which we live, it seems to miss the point. Say what you will about Silicon Valley, most of its major players backed Hillary Clinton. This is confirmed by campaign-finance filings and, as it happens, by the Russian hack of Democratic National Committee e-mails. “I hope you are well—thinking of all of you often and following every move!” Facebook’s chief operating officer, Sheryl Sandberg, wrote to Clinton’s campaign chairman, John Podesta, at one point.

It is troubling that Facebook, Google, and Amazon have managed to grab for themselves such a large share of online revenue while relying on content created by others. Quite possibly, it is also anti-competitive. Still, it seems a stretch to blame GAFA for the popularity of listicles or fake news.

Last fall, some Times reporters went looking for the source of a stream of largely fabricated pro-Trump stories that had run on a Web site called Departed. They traced them to a twenty-two-year-old computer-science student in Tbilisi named Beqa Latsabidze. He told the Times that he had begun the election season by pumping out flattering stories about Hillary Clinton, but the site hadn’t generated much interest. When he switched to pro-Trump nonsense, traffic had soared, and so had the site’s revenues. “For me, this is all about income,” Latsabidze said. Perhaps the real problem is not that Brand’s prophecy failed but that it came true. A “computer bum” sitting in Tbilisi is now so “empowered” as an individual that he can help turn an election halfway around the world.

Either out of conviction or simply out of habit, the gatekeepers of yore set a certain tone. They waved through news about state budget deficits and arms-control talks, while impeding the flow of loony conspiracy theories. Now Chartbeat allows everyone to see just how many (or, more to the point, how few) readers there really are for that report on the drought in South Sudan or that article on monopoly power and the Internet. And so it follows that there will be fewer such reports and fewer such articles. The Web is designed to give people what they want, which, for better or worse, is also the function of democracy.

Post-Cecil, post-fact, and mid-Trump, is there anything to be done? Taplin proposes a few fixes. To start, he wants the federal government to treat companies like Google and Facebook as monopolies and regulate them accordingly. (Relying on similar thinking, regulators in the European Union recently slapped Google with a $2.7-billion fine.)

Taplin notes that, in the late nineteen-forties, the U.S. Department of Justice went after A.T. & T., the Google of its day, for violating the Sherman Antitrust Act. The consent decree in the case, signed in 1956, compelled A.T. & T. to license all the patents owned by its research arm, Bell Labs, for a small fee. (One of the technologies affected by the decree was the transistor, which later proved essential to computers.) Google, he argues, could be similarly compelled to license its thousands of patents, including those for search algorithms, cell-phone operating systems, self-driving cars, smart thermostats, advertising exchanges, and virtual-reality platforms.

“It would seem that such a licensing program would be totally in line with Google’s stated ‘Don’t be evil’ corporate philosophy,” Taplin writes. At the same time, he urges musicians and filmmakers to take matters into their own hands by establishing their own distribution networks, along the lines of Magnum Photos, formed by Robert Capa, Henri Cartier-Bresson, and others in 1947.

“What if artists ran a video and audio streaming site as a nonprofit cooperative (perhaps employing the technology in some of those free Google patents)?” he asks at one point. “I have no illusion that the existing business structures of cultural marketing will go away,” he observes at another. “But my hope is that we can build a parallel structure that will benefit all creators.”

Foer prefers the model of artisanal cheesemakers. (“World Without Mind” apparently went to press before Amazon announced its intention to buy Whole Foods.) “The culture industries need to present themselves as the organic alternative, a symbol of status and aspiration,” he writes. “Subscriptions are the route away from the aisles of clickbait.” Just after the election, he notes, the Times added more than a hundred thousand new subscribers by marketing itself as a fake-news antidote. And, as an act of personal resistance, he suggests picking up a book. “If the tech companies hope to absorb the totality of human existence,” he writes, “then reading on paper is one of the few slivers of life that they can’t fully integrate.”

These remedies are all backward-looking. They take as a point of reference a world that has vanished, or is about to. (If Amazon has its way, even artisanal cheese will soon be delivered by drone.) Depending on how you look at things, this is either a strange place for meditations about the future to end up or a predictable one. People who worry about the fate of democracy still write (and read) books. Those who are determining it prefer to tweet. ♦

This article appears in other versions of the August 28, 2017, issue, with the headline “The Content of No Content.”

Source: This article was published at newyorker.com by Elizabeth Kolbert

Microsoft has created a new research lab with a focus on developing general-purpose artificial intelligence technology, the company revealed today. The lab will be located at Microsoft’s Redmond HQ, and will include a team of more than 100 scientists working on AI, from areas including natural language processing, learning and perception systems.

The aim of building a general-purpose AI that can effectively address problems in a range of different areas, rather than focusing on a single specific task, is one that many leading technology companies are pursuing. Most notably, perhaps, Google is attempting to tackle the challenge of more generalized AI via both its own Google Brain project and through efforts at DeepMind, the company it acquired in 2014, which is now its own subsidiary under mutual parent company Alphabet.

Microsoft’s new endeavor is called Microsoft Research AI, and it’ll pull from existing AI expertise at the company, as well as pursue new hires, including experts in related fields such as cognitive psychology to flesh out the team, Bloomberg says. The lab will also formally partner with MIT’s Center for Brains, Minds and Machines. Seeking academic-private tie-ups is not at all unusual in AI development — Microsoft, Google and others, including Uber, have made commitments to academic institutions in order to secure talent and build a pipeline of students with related expertise.

In addition to the research lab, Microsoft is going to create an AI ethics oversight panel that will act in an advisory capacity across the company, which is also very much in keeping with industry trends. Microsoft previously signed on to work with DeepMind, Amazon, Google, Facebook and IBM on a cross-company partnership for ethical AI development, and Google and DeepMind also have their own AI ethics board.

Source: This article was published at techcrunch.com by Darrell Etherington

Google is killing the 'Google Now' name but improving the underlying functionality to make it more controllable, engaging — and searchable.

Google Now was launched at Google I/O in June 2012. It was part of a package of updates and UI changes for mobile search, which included a female-voiced mobile assistant to compete with Apple’s Siri.

Google Now was initially a way to get contextually relevant information based on location, time of day and your calendar. It evolved to become much more sophisticated and elaborate, with a wide array of content categories delivered on cards. For a time it was being called “predictive search,” although that term has faded.

Now was billed as a way to get information on your smartphone without actively searching for it. It was heralded by some as the future of mobile search.

The ‘feed experience’ improves

Today Google is officially killing the “Google Now” brand. It’s not getting rid of the functionality, however. That will remain and is being upgraded with an improved design and some new features, including reciprocal connections between search and your personalized content feed.

Last December, Google introduced a new “feed experience” as part of Google Now, which featured topics in one tab and a second tab for personal information and updates, such as travel plans and meetings. In today’s rollout (Google app for Android and iOS), that two-tab structure is preserved, but the feed is becoming richer and more controllable. The rollout is US-only, with international markets to follow in the coming weeks.

Users will be able to follow content directly from mobile search results and have that surface on an ongoing basis in their feeds. A new “follow” button will appear in some contexts. However, most content that appears in the feed will still be determined algorithmically, based on search history and engagement with other Google properties such as YouTube.

There will also apparently be some content from locally trending topics. However, that trending content is not based on user contacts or social connections.

At a briefing Tuesday in San Francisco, the Google team, led by Ben Gomes, was asked several times how these changes compared to the Facebook News Feed. The answer was: this is about you and your interests, not topics your friends are engaged with.

Intensity of user interests to be reflected

The specific topics and cards that appear are also being calibrated to reflect the intensity of your interests. If you’re more interested in travel or hip-hop or bike racing than cooking or boxing or art, that will be reflected and emphasized in your feed accordingly. In other words, interest level will be captured.
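As a rough illustration, interest-intensity weighting of this sort can be sketched as a simple scoring loop. This is a hypothetical toy model: the event types, weights, and topic names are assumptions made up for illustration, and bear no relation to Google's actual ranking system.

```python
from collections import defaultdict

# Illustrative assumption: stronger signals (following a topic) count for
# more "intensity" than weaker ones (a single search or click).
EVENT_WEIGHTS = {"search": 1.0, "click": 2.0, "follow": 5.0}

def interest_scores(events):
    """Aggregate per-topic intensity from (topic, event_type) pairs."""
    scores = defaultdict(float)
    for topic, event_type in events:
        scores[topic] += EVENT_WEIGHTS.get(event_type, 0.0)
    return scores

def rank_feed(events):
    """Order topics so the most intensely followed interests surface first."""
    scores = interest_scores(events)
    return sorted(scores, key=scores.get, reverse=True)

events = [
    ("travel", "search"), ("travel", "click"), ("travel", "follow"),
    ("cooking", "search"), ("hip-hop", "click"), ("hip-hop", "click"),
]
# Travel (one search, one click, one follow) outranks hip-hop (two clicks),
# which outranks cooking (one search).
print(rank_feed(events))
```

The point of the sketch is only that interest *level*, not just interest *presence*, determines ordering: two topics a user has touched can still rank very differently.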

Google indicated that it will also be easy to unfollow topics: “Just tap on a given card in your feed or visit your Google app settings.” And, of course, as the company’s blog post asserts, “The more you use Google, the better your feed will be.”

Perhaps most interesting, from a “search” perspective, is that every card will have a header that will be able to initiate a mobile search with a tap. That wasn’t possible with Google Now. Thus there’s a feedback loop of sorts: search results can be followed, feed content can be searched.

It’s very much in Google’s interest to build products that keep the brand and some version of search in front of mobile users throughout the day. But Google is also trying to improve upon Now as a product, even as it gets rid of that name.

‘Vast majority’ of queries now mobile

Gomes said during the briefing that the “vast majority of our queries come from mobile.” Obviously, Google has very successfully transitioned to mobile, which wasn’t a foregone conclusion. Now it wants to give users more reasons to check in daily and new pathways into search. It’s not clear how widely Now was being used by the bulk of Google’s mobile audience.

Beyond the mobile app experience, Google said that it would be bringing the feed to the desktop version of Chrome in the near future, though it didn’t show that off. I’m imagining it as the reincarnation of iGoogle, a personalized start page that was shuttered in 2012 — the same year Now was introduced.

Source: This article was published at searchengineland.com by Greg Sterling

Google’s Uptime, an experimental app that enables people to watch YouTube videos with friends, is now available to everyone who has access to the US iOS App Store.

Uptime initially launched earlier this year and was created by Google’s internal incubator, Area 120. Google’s Area 120 program encourages Google employees to spend 20% of their time working on projects that are not directly related to their job. Uptime is one of many projects to have been launched through the Area 120 program.

When Uptime initially launched it required an invite code in order to use it, but now anyone is free to download it. Those using the app can connect with their Facebook account to find other friends using the app. Connections can also be made by following others within the app.

People can use Uptime to watch YouTube videos with friends in real-time, or they can be viewed at a later time while still being able to see friends’ reactions to the video. Reactions consist of various emoji that can be tapped on while watching a video, similar to other live video streaming services.

Since the launch of Uptime earlier this year, others have been trying to imitate the idea with apps like Cabana, Let’s Watch It, Fam, and so on. The number of competing apps to enter the marketplace may have spurred the decision to launch Uptime more widely.

Despite Area 120 apps technically falling under the Google umbrella, they are not branded by Google in the App Store nor do they receive much promotion from the company. It will be interesting to see if that changes in light of competing apps gaining traction as of late.

Uptime can be downloaded from the US iOS App Store.

Source: This article was published at searchenginejournal.com by Matt Southern

The internet has changed the way we discover and consume information. Think about the year 2000 — you put a keyword in the search bar, and the websites with the highest keyword concentration were the ones that appeared on top. Things gradually changed with the Panda, Penguin, Pigeon and other updates, which shifted the focus toward quality of content. The search industry was further revolutionized by the introduction of Google Instant around 2010. People were excited to see the search engine offer relevant results after reading just the first few letters of a keyword.

Fast forward to 2017, and search engines have become even smarter. Their focus is on offering the most relevant and useful information based on user preferences. Enter content discovery! Marketers are now keen to make brand content discoverable to ensure better awareness and traffic.

But what is all this buzz about content discovery? Let us take a look.

What is Content Discovery?

Content discovery is the art and science of using predictive algorithms to help make content recommendations based on how people search. Search engines and various other platforms are now using artificial intelligence (AI) to understand customer preferences and interests. This helps users to find content that’s most suitable for them.  

To understand what content discovery is all about, let us review some examples. Social media sites such as Facebook have content discovery features integrated into their algorithms. Consider the News Feed offered by the social platform. The content that appears in an individual feed is selected according to each user's past behavior and personal preferences. In fact, a survey carried out by Forrester Consulting found social media to be the most preferred source of discovery for news and information among online adults between the ages of 18 and 55. The survey also revealed that a young millennial follows an average of 121 publishers on social media.

Similar to Facebook, YouTube’s “Recommended for You” section is another example of how user activity and preferences fuel content discovery. 

Why is Content Discovery Important?

Content discovery has become more important than ever. This is because the amount of online content is increasing exponentially. Almost every brand is creating content to offer value for their audiences, which means it’s even more difficult for people to find the information they are looking for. Content discovery allows people to find information that is highly relevant and personalized. In fact, content discovery helps both consumers and online marketers. Here is how:

  • Consumers find desired data/information quickly without having to scour through hundreds of thousands of search results.
  • Online marketers can put relevant content in front of their targeted audience at the right time through the right channels.

Content discovery helps people weed out irrelevant and unimportant information. The next question is: how can brands, publishers and advertisers benefit from it?

How Content Discovery Helps Brands, Publishers and Advertisers

Marketers spend time creating high-quality content and sharing it across various channels, but often fail to get the attention they seek. Why? Chances are the content gets lost in the deluge. So what can marketers do to ensure their content gets seen? Here is what leaders are doing to get attention and expand their reach.

Brands, publishers and advertisers are leveraging content discovery platforms such as Taboola, Outbrain, Curiyo, Renoun and others to expand their reach and improve ROI. These tools let marketers drive deeper engagement through high-quality, relevant content, which gives them ample opportunities to monetize and capitalize on that engagement.

The platforms analyze user behaviors by considering a number of metrics such as time spent on a specific site, the path taken to reach a specific content source, search habits, preferences and others. These useful insights can be used to target content and advertisement campaigns and ensure better lead generation.
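As a rough illustration, here is a minimal Python sketch of how such behavior signals might be combined into a relevance score. All field names, weights and thresholds are hypothetical; real discovery platforms use far richer models and many more signals.

```python
def score_item(item, user):
    """Combine a few behavior signals into a single relevance score.

    Signals (all hypothetical): how much the user cares about the
    item's topic, how long visitors typically dwell on the item,
    and how fresh the item is.
    """
    topic_match = user["topic_affinity"].get(item["topic"], 0.0)  # 0..1
    dwell = min(item["avg_time_on_page"] / 300.0, 1.0)            # cap at 5 minutes
    recency = 1.0 if item["days_old"] <= 7 else 0.5
    return 2.0 * topic_match + 1.0 * dwell + 0.5 * recency

def recommend(items, user, k=3):
    """Return the k highest-scoring items for this user."""
    return sorted(items, key=lambda i: score_item(i, user), reverse=True)[:k]
```

In practice, a platform would learn the weights from engagement data rather than hard-coding them; the sketch only shows the shape of the idea.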

Content discovery has become even more important for brands as consumers increasingly prefer to consume information on their mobile devices rather than on their PCs and laptops. According to Statista, the number of smartphone users across the world increased from 1.5 billion in 2014 to 2.17 billion in 2016 and is expected to rise to 2.87 billion by 2020. Moreover, about 20 percent of millennials no longer use a desktop to access the Web.



With a number of content discovery platforms available, marketers, publishers and advertisers need not worry about a user leaving their site or blog to find further information on preferred topics.

Here are some quick tips that will help marketers present the most relevant content in front of their targeted audience:

1. Offer Users Quality Content

Marketers must focus on providing their audience with a steady stream of high-quality content instead of creating just a one-off piece. Different types of content appeal to different audiences, so make sure you cater to a broad one. A single piece of content can occasionally get a huge amount of attention and help a brand grow immensely, but that boost is only momentary. By consistently creating quality content, you can reap benefits for a much longer period.

2. Focus on Multi-Channel Strategies

Facebook can help you get a lot of attention, but its audience is only one segment of the people you could reach. To expand your reach, you need to leverage other channels as well. A comprehensive content marketing strategy that includes search marketing, Facebook, Twitter, Instagram and other channels will serve you better. Once you get started, you can measure how each channel contributes to the campaign and tweak your strategy accordingly.

3. Help Others and They Will Help You

Marketers often make the mistake of promoting only their own content. By sharing high-quality content from other sources or publishers, you offer variety to your audience, and they will be more likely to come to your site when they need information. As the saying goes, “you get what you give”: other publishers will also be more willing to share your content or mention it on their blogs and websites, which helps increase your reach.

4. Create Content That Your Audience Will Love

Who doesn’t love instant success? But when it comes to content marketing, you can never expect results overnight. The secret is to create content that your audience will love for years to come. With a long-term content discovery strategy, you can ensure lasting benefits. Once your audience is engaged, you can move on to measuring the performance of your content and adjusting accordingly to ensure better results.

5. Strategic Organization Aids Content Discovery

Content silos are harmful to your overall content marketing because they create dead ends in your engagement path. Silos form when you group content by date or type (blog posts, videos, etc.). Organize your content by topic instead, since that makes content discovery easier.

While you can make your content discoverable using search bars, internal links and a content recommendation engine, strategic organization helps users find the most relevant content quickly and easily. Moreover, organizing content by topic, persona or account makes it easier to measure performance and engagement using project management tools such as Trello, Workzone, Basecamp and others. This will help you identify which content performs best so you can leverage it further. For instance, if a blog post does extremely well, you can then turn it into a video, a slideshow and an infographic to build momentum.
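As a minimal sketch of the idea, assuming hypothetical content records that carry a topic tag, topic-based grouping can be as simple as this in Python:

```python
from collections import defaultdict

def group_by_topic(items):
    """Group content items by topic (rather than by date or type),
    so related pieces sit together instead of forming silos."""
    groups = defaultdict(list)
    for item in items:
        groups[item["topic"]].append(item["title"])
    return dict(groups)
```

The same grouping could key on persona or account instead of topic; the point is that the grouping key reflects what users are looking for, not when or how the content was produced.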

6. Offer Variety

Do you focus on creating textual content only? Think again. People now consume information on the go, so they may not have time to read lengthy articles. By offering a mix of content in your feed, you help your audience consume information in the form they prefer. This means repurposing your content into videos, slideshows, podcasts, infographics and other formats, which also lets you leverage a variety of channels and reach a larger audience.

Create high-quality content and use various content discovery tools to reach out to your targeted audience and maximize your ROI.

Future of Content Discovery

Attention spans are becoming shorter, so people want quick access to relevant information. Therefore, brands need to focus on delivering personalized and focused content to ensure better engagement. By gearing towards this new search model and leveraging the content distribution platforms, brands can fulfill a user’s desire to access quality content within moments.

However, this is just the start for content discovery. It is still evolving and improving, so it is not yet ready to replace generic search completely; a huge number of people still prefer the traditional way of finding content. But with changing user preferences, traditional search may soon be overtaken by modern content discovery techniques. Marketers must be prepared to adapt and keep satisfying the needs of the customer.


User preferences are changing quickly. Users now want to access relevant and useful information as quickly as possible, which is not possible with the traditional search model, where they must sift through numerous sources to find what they need. Hence the rise of content discovery: it helps users find content in moments.

As content discovery becomes even more popular, more platforms should be explored and tested to evaluate how they can further benefit businesses. Online marketers should start integrating content discovery into their content marketing strategies to retain their customers and keep satisfying them over the long term.

Author Bio

Pratik Dholakiya is the Co-Founder of E2M, a full service digital marketing agency and PRmention, a digital PR agency. He regularly speaks at various conferences about SEO, Content Marketing, Growth Hacking, Entrepreneurship and Digital PR. Pratik has spoken at NextBigWhat’s UnPluggd, IIT-Bombay, SMX Israel, and other major events across Asia. As a passionate marketer, he shares his thoughts and knowledge on publications like Search Engine Land, Entrepreneur Magazine, Fast Company, The Next Web and the Huffington Post to name a few. He has been named one of the top content marketing influencers by Onalytica three years in a row.

Source: This article was published on military-technologies.net

Google just removed 41 apps infected with adware from its Play Store

Forty-one Android apps infected with malicious software were removed from the Google Play Store on Thursday, but cybersecurity experts believe that up to 36.5 million people may have downloaded the "auto-clicking adware."

Dubbed "Judy," the malware was published by South Korean gaming studio Kiniwini under the name ENISTUDIO Corp. It's unclear how the malicious code got there: criminal third parties or the company itself may be responsible.

According to Tel Aviv-based cybersecurity company Check Point, the apps have been available in Google's Play Store for years, though the length of infection hasn't been determined.

"These apps also had a large amount of downloads between four and 18 million, meaning the total spread of the malware may have reached between 8.5 and 36.5 million users," the company explained Thursday.

"The malware uses infected devices to generate large amounts of fraudulent clicks on advertisements, generating revenues for the perpetrators behind it," Check Point added.

Applications infiltrated with malware are becoming problematic for Android app developers and consumers. As of last spring, an estimated 1.3 to 1.4 billion people owned Android phones, which are easier to infiltrate than iOS-based devices. The Google-developed operating system is "more open and adaptable," said security software company Sophos.

Apps featured in Apple's iOS store have gone through an in-depth examination. The thorough vetting process blocks "widespread malware infection" among iPhone users, though malicious software targeting Apple devices is on the rise, according to a report from SIXGILL.

Earlier this month, Google revealed "Play Protect," a service that scans Android devices "around the clock" to ensure proper protection.

A full list of the apps' package names and upload dates can be seen here.

The following apps were infected:

Animal Judy: Persian Cat Care
Fashion Judy: Pretty Rapper
Fashion Judy: Teacher Style
Animal Judy: Dragon Care
Chef Judy: Halloween Cookies
Fashion Judy: Wedding Party
Animal Judy: Teddy Bear Care
Fashion Judy: Bunny Girl Style
Fashion Judy: Frozen Princess
Chef Judy: Triangular Kimbap
Chef Judy: Udong Maker – Cook
Fashion Judy: Uniform Style
Animal Judy: Rabbit Care
Fashion Judy: Vampire Style
Animal Judy: Nine-Tailed Fox
Chef Judy: Jelly Maker – Cook
Chef Judy: Chicken Maker
Animal Judy: Sea Otter Care
Animal Judy: Elephant Care
Judy’s Happy House

Chef Judy: Hot Dog Maker – Cook
Chef Judy: Birthday Food Maker
Fashion Judy: Wedding Day
Fashion Judy: Waitress Style
Chef Judy: Character Lunch
Chef Judy: Picnic Lunch Maker
Animal Judy: Rudolph Care
Judy’s Hospital: Pediatrics
Fashion Judy: Country Style
Animal Judy: Feral Cat Care
Fashion Judy: Twice Style
Fashion Judy: Myth Style
Animal Judy: Fennec Fox Care
Animal Judy: Dog Care
Fashion Judy: Couple Style
Animal Judy: Cat Care
Fashion Judy: Halloween Style
Fashion Judy: EXO Style
Chef Judy: Dalgona Maker
Chef Judy: Service Station Food
Judy’s Spa Salon

Source: This article was published on wtae.com by Abigail Elise

Researchers from UC Santa Barbara and Georgia Tech have discovered a fresh class of Android attacks, called Cloak and Dagger, that can operate secretly on a phone, allowing hackers to log keystrokes, install software and otherwise control a device without alerting its owner. Cloak and Dagger exploits take advantage of the Android UI, and they require just two permissions to get rolling: SYSTEM_ALERT_WINDOW ("draw on top") and BIND_ACCESSIBILITY_SERVICE ("a11y").

This concerns researchers because Android automatically grants the draw-on-top permission to any app downloaded from the Play Store, and once a hacker is in, it's possible to trick someone into granting the a11y permission. A Cloak and Dagger-enabled app hides a layer of malicious activity under seemingly harmless visuals, luring users into clicking unseen buttons and typing into hidden keystroke loggers.

"To make things worse, we noticed that the accessibility app can inject the events, unlock the phone, and interact with any other app while the phone screen remains off," the researchers write. "That is, an attacker can perform a series of malicious operations with the screen completely off and, at the end, it can lock the phone back, leaving the user completely in the dark."

Google is aware of the exploit.

"We've been in close touch with the researchers and, as always, we appreciate their efforts to help keep our users safer," a spokesperson says. "We have updated Google Play Protect -- our security services on all Android devices with Google Play -- to detect and prevent the installation of these apps. Prior to this report, we had already built new security protections into Android O that will further strengthen our protection from these issues, moving forward."

One of the researchers, Yanick Fratantonio, tells TechCrunch the recent updates to Android O might address Cloak and Dagger, and the team will test it out and update its website accordingly. For now, he says, don't download random apps and keep an eye on those permissions.

Source: This article was published on engadget.com by Jessica Conditt

These days it's hard to find a country where several languages aren't spoken, and no matter what language you speak, the internet is within reach. Why not take advantage of this when optimizing your site and go multilingual?

Research shows that people prefer to make purchases when browsing the web in their native tongue. In fact, over half of the people surveyed (52.4 per cent to be exact) bought from sites that were only in their own language. Furthermore, 85.3 per cent of those asked required pre-purchase information in their preferred language when making important decisions online like buying insurance.

Over two-thirds of international buyers visit English-language websites around once a month. However, nearly three quarters of those visitors don't make purchases if they can't use their own credit cards or local currency (even if information is available in their language!).

According to a survey by Content Marketing World, out of 500 participating marketers 60 percent admit to lacking multilingual content marketing strategies. If you're a company or marketer lacking in the multilingual department, now is your chance to get ahead of the competition and go global!

Best Practices

Don't know where to start? Here are some helpful tips provided by Google:

"Utiliser une langue par page": Don't confuse your potential customers by showing off your multilingual skills all at the same time. Use one language per page and keep everybody happy and in the loop.

Shell out the dough and get a real person to translate your content. Google Translate may seem good, but you can't count on artificial intelligence to always catch the subtle intricacies of language (though your customers who speak that language definitely will).

Keep your URLs simple - instead of changing the whole URL between two different language pages, just add a short language code to show the pages are different: www.bestblogever.com/en/thebest vs. www.bestblogever.com/fr/thebest

Don't make assumptions - your customers might be cruising the web in Montreal, but that doesn't mean they can speak French. Instead of automatically redirecting based on location or perceived language, provide clearly labeled links between your content in different languages so your consumers can make the decision for themselves. Cross linking pages when localizing also makes life easier for our friend Googlebot.
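The URL scheme and cross-linking advice above can be sketched in a few lines of Python. The domain and paths are hypothetical (borrowed from the example earlier); the helper simply generates the clearly labeled per-language links that let users switch languages themselves.

```python
# One language prefix per page, same path for every language version,
# plus explicit cross-links between versions. Domain and paths are
# hypothetical examples.

SITE = "https://www.bestblogever.com"
LANGS = ["en", "fr"]

def localized_url(lang, path):
    """Build the URL for one language version of a page."""
    return f"{SITE}/{lang}/{path}"

def cross_links(path):
    """URLs of every language version of the same page, keyed by
    language code, for rendering clearly labeled language links."""
    return {lang: localized_url(lang, path) for lang in LANGS}
```

Because each language version lives at its own stable URL, users can bookmark and share the version they want, and crawlers can index every version without relying on location-based redirects.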

When translating content you are sure to end up with some overlap in what you're trying to communicate, a.k.a. duplicate content. However, Google says that similar content in varying languages is acceptable as long as it is intended for different users in different countries and lives at unique URLs.

In terms of deciding which languages you might want to delve into first, research shows these are the top ten languages being translated:

  1. French
  2. Spanish (Latin America)
  3. German
  4. Chinese
  5. Japanese
  6. Spanish (US)
  7. Portuguese (Brazil)
  8. French (Canada)
  9. Italian
  10. Spanish (Spain)

If your company is looking to reach a larger, national or worldwide audience consider making your website multilingual. It's not for every business, but if you can take advantage of a wider audience by translating your amazing site content it is worth the time and investment.

Source: This article was published on searchenginepeople.com

Google has been quietly rolling out new features and updates to Google My Business over the last several months, and columnist Joy Hawkins has compiled these underreported changes.

We all know that Google is constantly launching updates to their products (over 1,600 last year), and some of these changes are well covered and some slip by unnoticed. I have quietly been keeping track of some of the major changes I’ve noticed so far this year that would impact those of us who work in Local SEO and wanted to share my observations.

1. Google removes permanently closed listings from the Local Finder

If you look at the picture from my article last year about permanently closed listings, you’ll see that there used to be tons of “permanently closed” listings ranking in the Local Finder. They would typically show up at the end of the list (after the open ones), and if you edited a ranking listing to make it appear permanently closed, it would instantly drop to the back of the list.

I haven’t seen a single “permanently closed” listing in the Local Finder in months. This is mostly a good thing, since they aren’t overly useful for users.

The “permanently closed” label is problematic for local SEOs in a couple of scenarios. The first involves businesses with practitioners; these industries are the most likely to have “permanently closed” practitioner listings floating around that they don’t know about. Such listings are now harder to find, but customers could still be seeing them while searching on Google.

Tip: Search for all your existing and former practitioners on Google by name + city and make sure you don’t have any of these out there.

The second problematic scenario would be when spammers start marking competitors as “permanently closed” to make them disappear. You won’t get any type of notification from Google when this happens to your business (Thanks, Google!) unless you visit your Google My Business dashboard daily.

Tip: Since not everyone has time to do that, my suggestion would be to utilize a ranking tracker that also sends you alerts when they notice changes on your listing, like BrightLocal does.

2. Google removes the ability to access the classic version of Google Plus

Google came out with the new version of Google Plus back in 2015, but up until a couple of months ago, they still kept the classic version accessible — and it was the version that Google cached in their search results.

Why does this matter for local SEOs? The classic version had all the Name, Address, Phone Number (NAP) data on it that we loved so much, and the new version gives you none of this info. Many of us used the site:plus.google.com search to find duplicate listings for clients, and this function no longer works, since the cached version of Google Plus has no phone number, no address and no reviews.

Tip: Unless you’re using Google Plus for posts that get lots of engagement, you don’t have much else to do over there, since it’s now almost completely divorced from Google My Business. Posting random links to your blog articles won’t help you unless they get shares, +1s or comments.

3. Google launches a platform for reviewing edits to business listings on Google Maps on Desktop

Since MapMaker shut down, lots of people are under the impression that reviewing edits to business listings is no longer possible. Google has had the ability to review edits in the Google Maps app for quite some time, but since those of us in the local SEO industry rarely sit around doing client work on our phones, lots of people don’t realize this is possible. In March, Google also launched a “Check the Facts” feature on desktop for Local Guides. This is a very simplified version of editing and isn’t really comparable to what we used to have in MapMaker, but it does allow users to approve or deny each other’s edits to business listings. When this first launched, it was only available to Level 5 Local Guides but rolled out to all Local Guide levels a few weeks later.

4. Google removes pending edits for a listing’s status from showing up on the Google Maps app

I outlined in this article how spammers were attacking legit business listings by reporting their listings as spam just to get the pending status to show up on their listing on mobile. Indeed, these spammers shifted their focus to Trump Tower at one point, and searching for it on the Google Maps app produced the following listing:

Google removed pending edits for a listing’s status shortly after I wrote that article, so now if someone reports you as spam, you don’t have to worry unless the edit actually publishes.

5. Google rolls out the Snack Pack to more industries in the USA

Different from the 3-pack, the “Snack Pack” refers to the local layout that is missing the links to the business website or driving directions; instead of seeing these (useful) buttons, you get an image.

For some reason, Google decided earlier this year that all of us who search on Google would love to see pictures of bugs when searching for pest control instead of a website that would tell us more about the company we’re potentially hiring.

Mike Blumenthal pointed out that in addition to pest control, jewelers and sporting goods stores also now have this layout.

6. Businesses can now access 18 months of data from Insights inside their Google My Business dashboard

In April, Google added bulk insights to the dashboard, which might look unimpressive at first if you don’t catch the fact that you can now select a custom date range for data and aren’t stuck looking at one-week, one-month, or one-quarter intervals! This is a huge plus for agencies who onboard new clients and want the ability to see how their stats look before they start improving things.

Tip: Compare data year over year instead of month over month. I find this gives a much more accurate picture of improvement, especially for seasonal businesses.
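The year-over-year comparison is simple arithmetic; here is a small Python sketch with hypothetical Insights view counts for illustration.

```python
def yoy_change(current, previous):
    """Percent change versus the same period last year."""
    return (current - previous) / previous * 100.0

# Hypothetical listing views for June 2016 vs. June 2017:
change = yoy_change(1500, 1200)
print(f"Year-over-year change: {change:.1f}%")  # 25.0% growth
```

Comparing June against last June (rather than against May) keeps seasonal swings out of the comparison, which is why this view is more accurate for seasonal businesses.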

7. Google starts actively showing local pack ads on mobile

I first heard that this AdWords feature was coming last year while at SMX Advanced in Seattle. After tweeting a really blurry picture of the feature, we heard very little from Google about it until I noticed the ads starting to show up everywhere a couple months ago. Here is a picture of what they look like that I took using Mobile Moxie’s Mobile Search Simulator:

Notice anything else?

Tons of other updates have happened in the last few months, but I wanted to highlight the ones I found got very little coverage that you might have missed. Were there any other major things you noticed that didn’t make the list?

Source: This article was published on searchengineland.com by Joy Hawkins


You have no idea, kid!

This is an image of New York taken from the International Space Station, which orbits about 400 km above the point on Earth directly beneath it while travelling at 27,000 km/h.

An image of New York taken from the International Space Station. Photo: Quora

So I heard you say low-res camera? You must be blind if you call this low-res in spite of the distance and speed of the ISS.

First off, the camera specifications are driven by science and system requirements. If we need a high-resolution camera, the science we want to do with it must require such a resolution. Otherwise, we are wasting mass and power, two of the most precious resources for spacecraft.

And did I hear you say “no video”?

Have you heard of the High Definition Earth-Viewing System (HDEV) placed on the ISS? Here is a link:

live stream

A live stream of Earth’s view from the International Space Station. Photo: Quora

This is the live stream of Earth’s view from the International Space Station.

This space is too short to list the full specifications of the cameras used by NASA spacecraft.

If you are interested in Voyager 1’s wide angle camera, check these specs: Ring-Moon Systems Node.

Voyager 1 was launched in 1977 and here are some of the images taken by it.

Voyager 1 was launched in 1977 and this is an image taken by it. Photo: Quora
Additionally, these images have to be transmitted with the same quality from over 10 Astronomical Units away. Storage space, transfer bandwidth, power requirements, and many other factors come into play before deciding on camera resolution.

From the question description: “Are we to believe that NASA spends years and billions on planet exploration probes only to equip them with crappy, low-res cameras and no video?”

No, you just haven’t done your research quite well.
Source: This article was published on yahoo.com by Karthik Venkatesh


World's leading professional association of Internet Research Specialists - We deliver Knowledge, Education, Training, and Certification in the field of Professional Online Research. The AOFIRS is considered a major contributor in improving Web Search Skills and recognizes Online Research work as a full-time occupation for those that use the Internet as their primary source of information.
