Web Directories

Linda Manly


JACKSONVILLE, Fla., Feb. 7, 2017 /PRNewswire/ -- A majority of small business owners now embrace the use of online marketing channels like websites and social media to grow their company's reputation and their revenue, but many have not harnessed the full potential of their online presence and may be leaving money on the table. This is according to a report released today by Web.com (Nasdaq: WEB), a leading provider of Internet services and online marketing solutions for small businesses, and Dr. David Ricketts, Innovation Fellow in the Technology and Entrepreneurship Center at Harvard.

"The good news is that our survey shows nearly two-thirds of small business owners truly believe having an online presence will help them grow revenue and attract new customers. However, our report also indicates those same small businesses find it difficult to keep up with the continually changing dynamic of the web," said David Brown, chairman, CEO and president of Web.com.

"In 2017, it is no longer enough for small businesses to be online with only a simple website. They need marketing tools like search engine optimization (SEO), which will get them to the top of search lists, and they need business-specific social media channels that engage customers. The real challenge they face is not knowing where to find the online marketing help they require so that they can focus on running the business."


The first-ever Web.com Small Business Digital Trends Report surveyed small business owners nationwide to learn how they are using online channels to grow their businesses. The survey includes small business owners in a variety of professions, including dentists, contractors, artists, welders, hair salon owners, dry cleaners, and retailers with ecommerce sites.

Online Marketing: Using Basic Tools but Not All Necessary Tools

Despite the number of small business owners now embracing online marketing, only 54 percent of small business owners report they are very confident that their business's online presence is doing the job it's supposed to do. A deeper dive into the data shows small business owners have not yet tapped into the full suite of online marketing tools that are needed today to attract their next customer:

  • Only 17 percent of small business owners will be investing in SEO in 2017
  • 42 percent of small business owners admit they don't use both a robust website and social media channels to market their business
  • Only 12 percent cited that the main purpose of their website was for e-commerce, yet 31 percent of respondents identified themselves as a retail business
  • 26 percent admit to having only a single-page website
  • 43 percent say they have no plans to change or improve their online presence in 2017
  • 85 percent are hitting some kind of roadblock when attempting to use social media to promote their business

A majority of small business respondents (68 percent) say they are handling the building and maintenance of their online presence entirely in-house or on their own, compared to 22 percent who outsource this work to an online marketing firm, and 9 percent who solicit help from friends and family. When asked the number one area in online marketing they need help with in order to meet their top business priorities, the most popular answers were online advertising (29 percent) and website maintenance or expansion (26 percent).

Search Engine Optimization: An Underutilized Asset for Small Business Owners

Although SEO can help businesses stand out from their competitors, most small business owners are not citing it as a priority. Only 17 percent say they plan to add SEO to their online marketing strategy in 2017, and only 5 percent of respondents consider SEO a top priority for the year. The low number of respondents investing in SEO in 2017 may indicate a lack of awareness of the changing but important role of SEO and where to get help implementing it.

"Whatever a business's strategy is to reach customers – whether that's via social media, a website, or a combination of different strategies – small business owners need to stay focused on their main objective: to get their business in front of the right customers," says Brown. "The best way for small business owners to be found is to first identify what's unique and special about their business, and then make sure their site is visible using the right keywords and effective SEO."

Online Security and Small Business: Security Breaches May Go Undetected

An overwhelming majority of small business owners (81 percent) say their webpages are secure or very secure – but they may be unaware of vulnerabilities that could compromise their businesses. A report from online security company Whitehat Security finds that 86 percent of all websites have at least one vulnerability, indicating small business owners may be putting too much faith in their current cyber protections.

"When addressing cybersecurity, the primary question is – do these small business owners truly know if they are secure or not?" says Dr. Ricketts. "Security breaches can go undetected by even the largest of organizations, so small business owners may unknowingly be at risk."

Small Business Owners Are Hitting Roadblocks with Social Media

Small business owners are overwhelmingly (88 percent) embracing social media to support their businesses, and just over half (54 percent) plan to invest in social media in 2017. However, a clear majority (85 percent) reported encountering challenges or roadblocks when using social media to market their business. These include:

  • Concern of reputational risks (15 percent)
  • Being overwhelmed with the upkeep, including the need to constantly develop interesting content (14 percent)
  • Lack of understanding of how social media will help their business (13 percent)
  • Knowledge of how to set up social media channels so they integrate with their business (10 percent)

Additionally, 23 percent of respondents admit they only use their personal social media handles to market their business.

When asked which social media platform was most effective for their business, Facebook emerged as the clear winner, ranked 'most effective' four times more often than any other social media channel. Twitter was the next most effective platform, ranking ahead of channels like Pinterest, LinkedIn, Instagram, Google+ and Snapchat.

An infographic on the Web.com Small Business Digital Trends Report findings is available from Web.com.


The Web.com Small Business Digital Trends Report surveyed 300 small business owners (defined as a business with 1-500 employees) in the United States regarding their experiences of building and maintaining their online presence.

About Dr. David Ricketts

Dr. David Ricketts received his PhD from Harvard University. He is an Innovation Fellow in the Technology and Entrepreneurship Center at Harvard (TECH). Operating from within the John A. Paulson School of Engineering and Applied Sciences, TECH enables holistic exploration by serving as a crossroads of innovation education. Dr. Ricketts writes and speaks internationally on strategy, trends and best practices in entrepreneurship and innovation.

About Web.com

Web.com Group, Inc. (Nasdaq: WEB) provides a full range of Internet services to small businesses to help them compete and succeed online. Web.com meets the needs of small businesses anywhere along their lifecycle with affordable, subscription-based solutions including domains, hosting, website design and management, search engine optimization, online marketing campaigns, local sales leads, social media, mobile products and ecommerce solutions. For more information, please visit www.web.com; follow Web.com on Twitter @webdotcom or on Facebook at facebook.com/web.com.

Media Contact: Brian Wright, 904-371-6856

Source : http://www.prnewswire.com/news-releases/new-webcom-report-indicates-more-than-ever-small-business-owners-embrace-online-marketing-tools-300403490.html


If you Google “Was the Holocaust real?” right now, seven out of the top 10 results will be Holocaust denial sites. If you Google “Was Hitler bad?,” one of the top results is an article titled, “10 Reasons Why Hitler Was One Of The Good Guys.”
In December, responding to weeks of criticism, Google said that it tweaked its algorithm to push down Holocaust denial and anti-Semitic sites. But now, just a month later, its fix clearly hasn’t worked.
In addition to hateful search results, Google has had a similar problem with its “autocompletes” — when Google anticipates the rest of a query from its first word or two. Google autocompletes have often embodied racist and sexist stereotypes. Google image search has also generated biased results, absurdly tagging some photos of black people as “gorillas.”
The result of these horrific search results can be deadly. Google search results reportedly helped shape the racism of Dylann Roof, who murdered nine people in a historically black South Carolina church in 2015. Roof said that when he Googled “black on white crime, the first website I came to was the Council of Conservative Citizens,” which is a white supremacist organization. “I have never been the same since that day,” he said. And of course, in December, a Facebook-fueled fake news story about Hillary Clinton prompted a man to shoot up a pizza parlor in Washington D.C. The fake story reportedly originated in a white supremacist’s tweet.
These terrifying acts of violence and hate are likely to continue if action isn’t taken. Without a transparent curation process, the public has a hard time judging the legitimacy of online sources. In response, a growing movement of academics, journalists and technologists is calling for more algorithmic accountability from Silicon Valley giants. As algorithms take on more importance in all walks of life, they are increasingly a concern of lawmakers. Here are some steps Silicon Valley companies and legislators should take to move toward more transparency and accountability:

1. Obscure content that’s damaging and not of public interest.

When it comes to search results about an individual person’s name, many countries have aggressively forced Google to be more careful in how it provides information. Thanks to the Court of Justice of the European Union, Europeans can now request the removal of certain search results revealing information that is “inadequate, irrelevant, no longer relevant or excessive,” unless there is a greater public interest in being able to find the information via a search on the name of the data subject.
Such removals are a middle ground between information anarchy and censorship. They neither disappear information from the internet (it can be found at the original source) nor allow it to dominate the impression of the aggrieved individual. They are a kind of obscurity that lets ordinary individuals avoid having a single incident indefinitely dominate search results on their names. For example, a woman in Spain whose husband was murdered 20 years ago successfully forced Google Spain to take news of the murder off search results on her name.

Such removals are a middle ground between information anarchy and censorship.

2. Label, monitor and explain hate-driven search results.

In 2004, anti-Semites boosted a Holocaust-denial site called “Jewwatch” into the top 10 results for the query “Jew.” Ironically, some of those horrified by the site may have helped by linking to it in order to criticize it. The more a site is linked to, the more prominence Google’s algorithm gives it in search results.
Google responded to complaints by adding a headline at the top of the page entitled “An explanation of our search results.” A web page linked to the headline explained why the offensive site appeared so high in the relevant rankings, thereby distancing Google from the results. The label, however, no longer appears. In Europe and many other countries, lawmakers should consider requiring such labeling in the case of obvious hate speech. To avoid mainstreaming extremism, labels may link to accounts of the history and purpose of groups with innocuous names like “Council of Conservative Citizens.”
In the U.S., this type of regulation may be considered a form of “compelled speech,” barred by the First Amendment. Nevertheless, better labeling practices for food and drugs have escaped First Amendment scrutiny in the U.S., and why should information itself be different? As law professor Mark Patterson has demonstrated, many of our most important sites of commerce are markets for information: search engines are not offering products and services themselves but information about products and services, which may well be decisive in determining which firms and groups fail and which succeed. If they go unregulated, easily manipulated by whoever can afford the best search engine optimization, people may be left at the mercy of unreliable and biased sources.

Better labeling practices for food and drugs have escaped First Amendment scrutiny in the U.S. Why should information itself be different?

3. Audit logs of the data fed into algorithmic systems.

We also need to get to the bottom of how some racist or anti-Semitic groups and individuals are manipulating search. We should require immutable audit logs of the data fed into algorithmic systems. Machine-learning, predictive analytics or algorithms may be too complex for a person to understand, but the data records are not.
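The article doesn’t specify what an “immutable audit log” would look like in practice; one common construction is an append-only log in which each record’s hash chains to the previous record, so any retroactive edit invalidates every later entry. A minimal sketch under that assumption (class and method names are hypothetical):

```python
import hashlib
import json


class AuditLog:
    """Append-only log of data records fed into an algorithmic system.

    Each entry stores a SHA-256 hash chained to the previous entry's hash,
    so tampering with any past record breaks verification of the chain.
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (record_json, chained_hash) pairs
        self._last_hash = self.GENESIS

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True)
        chained = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append((payload, chained))
        self._last_hash = chained
        return chained

    def verify(self) -> bool:
        """Recompute the whole chain; False if any record was altered."""
        prev = self.GENESIS
        for payload, stored in self.entries:
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if expected != stored:
                return False
            prev = stored
        return True
```

The point of the construction is the asymmetry the article relies on: the ranking model may be opaque, but an outside auditor holding this log can mechanically check that the input data records were not rewritten after the fact.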
A relatively simple set of reforms could vastly increase the ability of entities outside Google and Facebook to determine whether and how the firms’ results and news feeds are being manipulated. There is rarely adequate profit motive for firms themselves to do this — but motivated non-governmental organizations can help them be better guardians of the public sphere.

4. Possibly ban certain content.

In cases where computational reasoning behind search results really is too complex to be understood in conventional narratives or equations intelligible to humans, there is another regulatory approach available: to limit the types of information that can be provided.
Though such an approach would raise constitutional objections in the U.S., nations like France and Germany have outright banned certain Nazi sites and memorabilia. Policymakers should also closely study laws regarding “incitement to genocide” to develop guidelines for censoring hate speech with a clear and present danger of causing systematic slaughter or violence against vulnerable groups.

It’s a small price to pay for a public sphere less warped by hatred.

5. Permit limited outside annotations to defamatory posts and hire more humans to judge complaints.

In the U.S. and elsewhere, limited annotations ― “rights of reply” ― could be permitted in certain instances of defamation of individuals or groups. Google continues to maintain that it doesn’t want human judgment blurring the autonomy of its algorithms. But even spelling suggestions depend on human judgment, and in fact, Google developed that feature not only by means of algorithms but also through a painstaking, iterative interplay between computer science experts and human beta testers who report on their satisfaction with various results configurations.
It’s true that the policy for alternative spellings can be applied generally and automatically once the testing is over, while racist and anti-Semitic sites might require fresh and independent judgment after each complaint. But that is a small price to pay for a public sphere less warped by hatred.
We should commit to educating users about the nature of search and other automated content curation and creation. Search engine users need media literacy to understand just how unreliable Google can be. But we also need vigilant regulators to protect the vulnerable and police the worst abuses. Truly accountable algorithms will only result from a team effort by theorists and practitioners, lawyers, social scientists, journalists and others. This is an urgent, global cause with committed and mobilized experts ready to help. Let’s hope that both digital behemoths and their regulators are listening.
EDITOR’S NOTE: The WorldPost reached out to Google for comment and received the following from a Google spokesperson.
Search ranking:
Google was built on providing people with high-quality and authoritative results for their search queries. We strive to give users a breadth of content from a variety of sources, and we’re committed to the principle of a free and open web. Understanding which pages on the web best answer a query is a challenging problem, and we don’t always get it right. When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the problems, rather than manually removing these one-by-one. We are working on improvements to our algorithm that will help surface more high quality, credible content on the web, and we’ll continue to improve our algorithms over time in order to tackle these challenges.
We’ve received a lot of questions about Autocomplete, and we want to help people understand how it works: Autocomplete predictions are algorithmically generated based on users’ search activity and interests. Users search for a wide range of material on the web ― 15 percent of searches we see every day are new. Because of this, terms that appear in Autocomplete may be unexpected or unpleasant. We do our best to prevent offensive terms, like porn and hate speech, from appearing, but we don’t always get it right. Autocomplete isn’t an exact science, and we’re always working to improve our algorithms.
Image search:
Our image search results are a reflection of content from across the web, including the frequency with which types of images appear and the way they’re described online. This means that sometimes unpleasant portrayals of subject matter online can affect what image search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs.
Author : Frank Pasquale

A free AI-based scholarly search engine that aims to outdo Google Scholar is expanding its corpus of papers to cover some 10 million research articles in computer science and neuroscience, its creators announced on 11 November. Since its launch last year, it has been joined by several other AI-based academic search engines, most notably a relaunched effort from computing giant Microsoft.

Semantic Scholar, from the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington, unveiled its new format at the Society for Neuroscience annual meeting in San Diego. Some scientists who were given an early view of the site are impressed. “This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University, California. “It leads you through what is otherwise a pretty dense jungle of information.”

The search engine first launched in November 2015, promising to sort and rank academic papers using a more sophisticated understanding of their content and context. The popular Google Scholar has access to about 200 million documents and can scan articles that are behind paywalls, but it searches merely by keywords. By contrast, Semantic Scholar can, for example, assess which citations to a paper are most meaningful, and rank papers by how quickly citations are rising—a measure of how ‘hot’ they are.
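The article does not describe how Semantic Scholar actually computes this ‘hotness’ signal; as a rough illustration of the idea, one plausible metric is citations gained per year within a recent window. The function names and data layout below are hypothetical, a minimal sketch rather than the real system:

```python
def citation_velocity(citation_years, window=3, current_year=2016):
    """Citations gained per year over the most recent `window` years.

    citation_years: list of years in which the paper was cited.
    """
    recent = [y for y in citation_years if current_year - y < window]
    return len(recent) / window


def rank_by_velocity(papers, **kwargs):
    """Sort papers (dicts with a 'citations' list of years), hottest first."""
    return sorted(
        papers,
        key=lambda p: citation_velocity(p["citations"], **kwargs),
        reverse=True,
    )
```

Under this toy metric, a paper cited heavily in the last three years outranks one with the same total citations accrued a decade earlier, which is the intuition behind ranking by how quickly citations are rising.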

When first launched, Semantic Scholar was restricted to 3 million papers in the field of computer science. Thanks in part to a collaboration with AI2’s sister organization, the Allen Institute for Brain Science, the site has now added millions more papers and new filters catering specifically for neurology and medicine; these filters enable searches based, for example, on which part of the brain or cell type a paper investigates, which model organisms were studied and what methodologies were used. Next year, AI2 aims to index all of PubMed and expand to all the medical sciences, says chief executive Oren Etzioni.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”


Semantic Scholar is not the only AI-based search engine around, however. Computing giant Microsoft quietly released its own AI scholarly search tool, Microsoft Academic, to the public this May, replacing its predecessor, Microsoft Academic Search, which the company stopped adding to in 2012.

Microsoft’s academic search algorithms and data are available for researchers through an application programming interface (API) and the Open Academic Society, a partnership between Microsoft Research, AI2 and others. “The more people working on this the better,” says Kuansan Wang, who is in charge of Microsoft's effort. He says that Semantic Scholar is going deeper into natural-language processing—that is, understanding the meaning of full sentences in papers and queries—but that Microsoft’s tool, which is powered by the semantic search capabilities of the firm's web-search engine Bing, covers more ground, with 160 million publications.

Like Semantic Scholar, Microsoft Academic provides useful (if less extensive) filters, including by author, journal or field of study. And it compiles a leaderboard of most-influential scientists in each subdiscipline. These are the people with the most ‘important’ publications in the field, judged by a recursive algorithm (freely available) that judges papers as important if they are cited by other important papers. The top neuroscientist for the past six months, according to Microsoft Academic, is Clifford Jack of the Mayo Clinic, in Rochester, Minnesota.
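Microsoft has not published the details of its ranking here, but the recursive definition quoted above — a paper is important if it is cited by other important papers — is the classic eigenvector-centrality idea, typically solved by power iteration over the citation graph. A generic sketch of that technique (not Microsoft’s actual algorithm; the damping factor and iteration count are conventional assumptions):

```python
def importance(cites, damping=0.85, iters=50):
    """Recursive importance scores over a citation graph.

    cites: {paper_id: [paper_ids it cites]}. Each paper passes a share of
    its own score to the papers it cites; repeating this to convergence
    yields scores where being cited by important papers counts for more.
    """
    papers = list(cites)
    n = len(papers)
    score = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, refs in cites.items():
            if refs:
                share = damping * score[p] / len(refs)
                for r in refs:
                    new[r] += share
            else:
                # Papers citing nothing spread their score evenly.
                for q in papers:
                    new[q] += damping * score[p] / n
        score = new
    return score
```

In a toy graph where papers A and C both cite B, the iteration concentrates score on B, matching the intuition that the most-cited-by-important-papers node wins.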

Other scholars say that they are impressed by Microsoft’s effort. The search engine is getting close to combining the advantages of Google Scholar’s massive scope with the more-structured results of subscription bibliometric databases such as Scopus and the Web of Science, says Anne-Wil Harzing, who studies science metrics at Middlesex University, UK, and has analysed the new product. “The Microsoft Academic phoenix is undeniably growing wings,” she says. Microsoft Research says it is working on a personalizable version—where users can sign in so that Microsoft can bring applicable new papers to their attention or notify them of citations to their own work—by early next year.

Other companies and academic institutions are also developing AI-driven software to delve more deeply into content found online. The Max Planck Institute for Informatics, based in Saarbrücken, Germany, for example, is developing an engine called DeepLife specifically for the health and life sciences. “These are research prototypes rather than sustainable long-term efforts,” says Etzioni.

In the long term, AI2 aims to create a system that will answer science questions, propose new experimental designs or throw up useful hypotheses. “In 20 years’ time, AI will be able to read—and more importantly, understand—scientific text,” Etzioni says.

Author : Nicola Jones

Source : https://www.scientificamerican.com/article/new-ai-based-search-engines-are-a-ldquo-game-changer-rdquo-for-science-research/

Have you ever been attacked by trolls on social media? I have. In December a mocking tweet from white supremacist David Duke led his supporters to turn my Twitter account into an unholy sewer of Nazi ravings and disturbing personal abuse. It went on for days.

We’re losing the Internet war with the trolls. Faced with a torrent of hate and abuse, people are giving up on social media, and websites are removing comment features. Who wants to be part of an online community ruled by creeps and crazies?

Fortunately, this pessimism may be premature. A new strategy promises to tame the trolls and reinvigorate civil discussion on the Internet. Hatched by Jigsaw, an in-house think tank at Google’s parent company, Alphabet (GOOGL, +1.96%), the tool relies on artificial intelligence and could solve the once-impossible task of vetting floods of online comments.

To explain what Jigsaw is up against, chief research scientist Lucas Dixon compares the troll problem to so-called denial-of-service attacks in which attackers flood a website with garbage traffic in order to knock it off-line.

“Instead of flooding your website with traffic, it’s flooding the comment section or your social media or hashtag so that no one else can have a word, and basically control the conversation,” says Dixon.

Such surges of toxic comments are a threat not only to individuals, but also to media companies and retailers—many of whose business models revolve around online communities. As part of its research on trolls, Jigsaw is beginning to quantify the damage they do. In the case of Wikipedia, for instance, Jigsaw can measure the correlation between a personal attack on a Wikipedia editor and the subsequent frequency the editor will contribute to the site in the future.
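Jigsaw’s method for measuring that correlation isn’t described in the article; the standard statistic for this kind of question is a Pearson correlation between an “was attacked” indicator and the editor’s subsequent contribution count. A self-contained sketch with purely illustrative numbers (the data below is invented for the example, not Jigsaw’s):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


# Hypothetical data: 1 = editor was personally attacked, paired with
# that editor's edit count in the following month.
was_attacked = [1, 1, 1, 0, 0, 0]
edits_after = [2, 3, 1, 10, 12, 9]
r = pearson(was_attacked, edits_after)
```

A strongly negative `r` on real data would quantify the claim that attacked editors contribute less afterwards.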

The solution to today’s derailed online discourse lies in reams of data and deep learning, a fast-evolving subset of artificial intelligence that mimics the neural networks of the brain. Deep learning gave rise to recent and remarkable breakthroughs in Google’s translation tools.

In the case of comments, Jigsaw is using millions of comments from the New York Times and Wikipedia to train machines to recognize traits like aggression and irrelevancy. The implication: A site like the Times, which has the resources to moderate only about 10% of its articles for comments, could soon deploy algorithms to expand those efforts 10-fold.

While the tone and vocabulary on one media outlet comment section may be radically different from another’s, Jigsaw says it will be able to adapt its tools for use across a wide variety of websites. In practice, this means a small blog or online retailer will be able to turn on comments without fear of turning a site into a vortex of trolls.

Technophiles seem keen on what Jigsaw is doing. A recent Wired feature dubbed the unit the “Internet Justice League” and praised its range of do-gooder projects.

But some experts say that the Jigsaw team may be underestimating the challenge.

Recent high-profile machine learning projects focused on identifying images and translating text. But Internet conversations are highly contextual: While it might seem obvious, for example, to train a machine learning program to purge the word “bitch” from any online comment, the same algorithm might also flag posts in which people are using the term more innocuously—as in, “Life’s a bitch,” or “I hate to bitch about my job, but …” Teaching a computer to reliably catch the slur won’t be easy.
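The failure mode described above is easy to demonstrate: a context-blind blocklist filter cannot tell the slur from the idiom. A minimal sketch (function name and blocklist are illustrative, not any real moderation system):

```python
def naive_keyword_filter(comment, blocklist=frozenset({"bitch"})):
    """Flags a comment if any blocked word appears, ignoring all context.

    This is exactly the approach the article criticizes: it catches the
    abusive use but also flags innocuous idioms as false positives.
    """
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    return bool(words & blocklist)
```

Both “You absolute bitch!” and “Life’s a bitch, but I love my job” are flagged, which is why the article argues that reliably catching the slur requires models that understand context, not just vocabulary.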

“Machine learning can understand style but not context or emotion behind a written statement, especially something as short as a tweet. This is stuff it takes a human a lifetime to learn,” says David Auerbach, a former Google software engineer. He adds that the Jigsaw initiative will lead to better moderation tools for sites like the New York Times but will fall short when it comes to more freewheeling forums like Twitter and Reddit.

Such skepticism doesn’t faze Jigsaw’s Dixon. He points out that, like denial-of-service attacks, trolling is a problem that will never be fully solved, but its effects can be mitigated. Using the recent leaps in machine learning technology, Jigsaw will tame the trolls enough to let civility regain the upper hand, Dixon believes.

Jigsaw researchers also point out that gangs of trolls—the sort that pop up and spew vile comments en masse—are often a single individual or organization deploying bots to imitate a mob. And Jigsaw’s tools are rapidly growing adept at identifying and stifling such tactics.

Dixon also has an answer to the argument that taming trolls won’t work because the trolls will simply adapt their insults whenever a moderating tool catches on to them.

“The more we introduce tools, the more creative the attacks will be,” Dixon says. “The dream is the attacks at some level get so creative no one understands them anymore and they stop being attacks.” 



Increasingly, popular media sites and blogs, from NPR to Reuters, are eliminating comments from their pages.


July 2015 
Ellen Pao, interim CEO of Reddit, resigns in the wake of what she calls “one of the largest trolling attacks in history.”


July 2016
Movie actress Leslie Jones quits Twitter after trolls send a barrage of racist and sexual images. In one of her final tweets, she writes, “You won’t believe the evil.”


A version of this article appears in the February 1, 2017 issue of Fortune with the headline "Troll Hunters."

Author : Jeff John Roberts

Source : http://fortune.com/2017/01/23/jigsaw-google-internet-trolls/


