This list offers some tips and tools to help you get the most out of your Internet searches.

Semantic Search Tools and Databases

Semantic search tools try to replicate the way the human brain thinks about and categorizes information in order to return more relevant results. Give some of these semantic tools and databases a try.

  • Zotero. Firefox users will like this add-on that helps you organize your research material by collecting, managing, and citing any references from Internet research.
  • Freebase. This community-powered database includes information on millions of topics.
  • Powerset. Enter a topic, phrase, or question to find information from Wikipedia with this semantic application.
  • Kartoo. Enter any keyword to receive a visual map of the topics that pertain to your keyword. Hover your mouse over each to get a thumbnail of the website.
  • DBpedia. Another Wikipedia-based resource; ask complex questions with this semantic program to get results from within Wikipedia.
  • Quintura. Entering your search term will create a cloud of related terms as well as a list of links. Hover over one of the words or phrases in the cloud to get an entirely different list of links.
  • True Knowledge. Help with the current beta test of this search engine, or try its Quiz Bot, which finds answers to your questions.
  • Stumpedia. This search engine relies on its users to index, organize, and review information coming from the Internet.
  • Evri. This search engine provides you with highly relevant results from articles, papers, blogs, images, audio, and video on the Internet.
  • Gnod. When you search for books, music, movies and people on this search engine, it remembers your interests and focuses the search results in that direction.
  • Boxxet. Search for what interests you and you will get results from the "best of" news, blogs, videos, photos, and more. Type in your keyword and in addition to the latest news on the topic, you will also receive search results, online collections, and more.

Meta-Search Engines

Meta-search engines use the resources of many different search engines to gather the most results possible. Many of these will also eliminate duplicates and classify results to enhance your search experience.

  • SurfWax. This search engine works very well for reaching deep into the web for information.
  • Academic Index. Created by the former chair of the Texas Association of School Librarians, this meta-search engine only pulls from databases and resources that are approved by librarians and educators.
  • Infomine. This search tool has been built by a pool of libraries in the United States.
  • Clusty. Clusty searches through top search engines, then clusters the results so that information that may have been hidden deep in the search results is now readily available.
  • Dogpile. Dogpile searches rely on several top search engines for the results then removes duplicates and strives to present only relevant results.
  • Turbo 10. This meta-search engine is specifically designed to search the deep web for information.
  • Multiple Search. Save yourself the work by using this search engine that looks among major search engines, social networks, flickr, Wikipedia, and many more sites.
  • Mamma. Click on the Power Search option to customize your search experience with this meta-search engine.
  • World Curry Guide. This meta-search tool with a strong European influence has been around since 1997 and is still growing strong.
  • Fazzle.com. Give this meta-search engine a try. It accesses a large number of databases and claims to have more access to information than Google.
  • Icerocket. Search blogs as well as the general Internet, MySpace, the news, and more to receive results by posting date.
  • iZito. Get results from a variety of major search engines that come to you clustered in groups. You can also receive only US website results or receive results with a more international perspective.
  • Ujiko. This unusual meta-search tool allows you to customize your searches by eliminating results or tagging some as favorites.
  • IncyWincy. This Invisible Web search engine behaves as a meta-search engine, tapping into other search engines and filtering the results. It searches the web, directories, forms, and images. With a free registration, you can track search results with alerts.
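The merge-and-deduplicate step that meta-search engines perform can be sketched in a few lines. This is only an illustration of the general idea (the function name and round-robin merge strategy below are my own, not any particular engine's implementation):

```python
def merge_results(*result_lists):
    """Merge ranked result lists from several engines, dropping duplicate
    URLs while preserving each URL's earliest rank."""
    seen = set()
    merged = []
    # Round-robin across engines, rank by rank, so no single engine
    # dominates the top of the merged list.
    longest = max(len(results) for results in result_lists)
    for rank in range(longest):
        for results in result_lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

print(merge_results(
    ["a.com", "b.com", "c.com"],   # engine 1's ranking
    ["b.com", "d.com"],            # engine 2's ranking
))
```

Real meta-search engines add result clustering and relevance scoring on top of this, but the core value is the same: one query, many sources, no duplicates.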

General Search Engines and Databases

These databases and search engines for databases will provide information from places on the Internet most typical search engines cannot.

  • DeepDyve. One of the newest search engines specifically targeted at exploring the deep web, this one is available after you sign up for a free membership.
  • OAIster. Search for digital items with this tool that provides 12 million resources from over 800 repositories.
  • direct search. Search through all the direct search databases or select a specific one with this tool.
  • CloserLook Search. Search for information on health, drugs and medicine, city guides, company profiles, and Canadian airfares with this customized search engine that specializes in the deep web.
  • Northern Light Search. Find information with the quick search or browse through other search tools here.
  • Yahoo! Search Subscriptions. Use this tool to combine a search on Yahoo! with searches in journals where you have subscriptions such as Wall Street Journal and New England Journal of Medicine.
  • Librarians’ Internet Index (LII). This publicly funded website and weekly newsletter serves California, the nation, and the world.
  • The Scout Archives. This database is the culmination of nine years’ worth of compiling the best of the Internet.
  • Daylife. Find news with this site that offers some of the best global news stories along with photos, articles, quotes, and more.
  • Silobreaker. This tool shows how news and the people in it impact global culture, with current news stories, corresponding maps, graphs of trends, networks of related people or topics, fact sheets, and more.
  • spock. Find anyone on the web who might not normally show up on the surface web through blogs, pictures, social networks, and websites here.
  • The WWW Virtual Library. One of the oldest databases of information available on the web, this site allows you to search by keyword or category.
  • pipl. Specifically designed for searching the deep web for people, this search engine claims to be the most powerful for finding someone.
  • Complete Planet. This free, well-designed directory makes it easy to access the mass of dynamic databases that are cloaked from general-purpose searches.
  • Infoplease. This information portal offers a host of features. Using the site, you can tap into a good number of encyclopedias, almanacs, an atlas, and biographies. Infoplease also has a few nice offshoots, like Factmonster.com for kids and Biosearch, a search engine just for biographies.

Academic Search Engines and Databases

The world of academia has many databases not accessible by Google and Yahoo!, so give these databases and search engines a try if you need scholarly information.

  • Google Scholar. Find information among academic journals with this tool.
  • WorldCat. Use this tool to find items in libraries including books, CDs, DVDs, and articles.
  • getCITED. This database of academic journal articles and book chapters also includes a discussion forum.
  • Microsoft Libra. If you are searching for computer science academic research, then Libra will help you find what you need.
  • BASE – Bielefeld Academic Search Engine. This multi-disciplinary search engine focuses on academic research and is available in German, Polish, and Spanish as well as English.
  • yovisto. This search engine is an academic video search tool that provides lectures and more.
  • AJOL – African Journals Online. Search academic research published in AJOL with this search engine.
  • HighWire Press. From Stanford, use this tool to access thousands of peer-reviewed journals and full-text articles.
  • MetaPress. This tool claims to be the "world’s largest scholarly content host" and provides results from journals, books, reference material, and more.
  • OpenJ-Gate. Access over 4500 open journals with this tool that allows you to restrict your search to peer-reviewed journals or professional and industry journals.
  • Directory of Open Access Journals. This journal search tool provides access to over 3700 top "quality controlled" journals.
  • Intute. The resources here are all hand-selected and specifically for education and research purposes.
  • Virtual Learning Resource Center. This tool provides links to thousands of academic research sites to help students at any level find the best information for their Internet research projects.
  • Gateway to 21st Century Skills. This resource for educators is sponsored by the US Department of Education and provides information from a variety of places on the Internet.
  • MagBot. This search engine provides journal and magazine articles on topics relevant to students and their teachers.
  • Michigan eLibrary. Find full-text articles as well as specialized databases available for searching.

Scientific Search Engines and Databases

The scientific community keeps many databases that can provide a huge amount of information but may not show up in searches through an ordinary search engine. Check these out to see if you can find what you need to know.

  • Science.gov. This search engine offers specific categories including agriculture and food, biology and nature, Earth and ocean sciences, health and medicine, and more.
  • WorldWideScience.org. Search for science information with this connection to international science databases and portals.
  • CiteSeer.IST. This search engine and digital library will help you find information within scientific literature.
  • Scirus. This far-reaching search engine has a purely scientific focus; it scours journals, scientists’ homepages, courseware, preprint server material, patents, and institutional intranets.
  • Scopus. Find academic information among science, technology, medicine, and social science categories.
  • GoPubMed. Search for biomedical texts with this search engine that accesses PubMed articles.
  • the Gene Ontology. Search the Gene Ontology database for genes, proteins, or Gene Ontology terms.
  • PubFocus. This search engine searches Medline and PubMed for information on articles, authors, and publishing trends.
  • Scitation. Find over one million scientific papers from journals, conferences, magazines, and other sources with this tool.

Custom Search Engines

Custom search engines narrow your focus and eliminate quite a bit of the extra information usually contained in search results. Use these resources to find custom search engines or use the specific custom search engines listed below.

  • CustomSearchEngine.com. This listing includes many of the Google custom search engines created.
  • CustomSearchGuide.com. Find custom search engines here or create your own.
  • CSE Links. Use this site to find Google Coop custom search engines.
  • PGIS PPGIS Custom Search. This search engine is customized for those interested in the "practice and science" of PGIS/PPGIS.
  • Files Tube. Search for files in file sharing and uploading sites with this search engine.
  • Rollyo. "Roll your own search engine" at this site where you determine which sites will be included in your searches.

Collaborative Information and Databases

One of the oldest forms of information dissemination is word-of-mouth, and the Internet is no different. With the popularity of bookmarking and other collaborative sites, obscure blogs and websites can gain plenty of attention. Follow these sites to see what others are reading.

  • Del.icio.us. As readers find interesting articles or blog posts, they can tag, save, and share them so that others can enjoy the content as well.
  • Digg. As people read blogs or websites, they can "digg" the ones they like, thus creating a network of user-selected sites on the Internet.
  • Technorati. Not only is this site a blog search engine, but it is also a place for members to vote and share, thus increasing the visibility for blogs.
  • StumbleUpon. As you read information on the Internet, you can Stumble it and give it a thumbs up or down. The more you Stumble, the more closely the content becomes aligned with your tastes.
  • Reddit. Working similarly to StumbleUpon, Reddit asks you to vote on articles, then customizes content based on your preferences.
  • Twine. With Twine you can search for information as well as share with others and get recommendations from Twine.
  • Kreeo.com. This collaborative site offers shared knowledge from its members through forums, blogs, and shared websites.

Hints and Strategies

Searching the deep web should be done a bit differently, so use these strategies to help you get started on your deep web searching.

  • Don’t rely on old ways of searching. Be aware that an estimated 99% of the content on the Internet doesn’t show up in typical search engine results, so think about other ways of searching.
  • Search for databases. Using any search engine, enter your keyword alongside "database" to find any searchable databases (for example, "running database" or "woodworking database").
  • Get a library card. Many public libraries offer access to research databases for users with an active library card.
  • Stay informed. Reading blogs or other updated guides about Internet searches on a regular basis will ensure you are staying updated with the latest information on Internet searches.
  • Search government databases. There are many government databases available that have plenty of information you may be seeking.
  • Bookmark your databases. Once you find helpful databases, don’t forget to bookmark them so you can always come back to them again.
  • Practice. Just like with other types of research, the more you practice searching the deep web, the better you will become at it.
  • Don’t give up. Researchers agree that much of the information hidden in the deep web is among the best-quality information available.
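The "search for databases" tip above can even be scripted. Here is a minimal sketch (the helper name and query patterns are my own, not from any tool mentioned here) that turns a keyword into ready-to-paste "keyword plus database" search URLs:

```python
from urllib.parse import quote_plus

def database_queries(keyword, engine="https://www.google.com/search?q="):
    """Build the 'keyword plus database' queries suggested above and
    return URL-encoded search links for a general-purpose engine."""
    queries = [f'"{keyword} database"', f"{keyword} searchable database"]
    return [engine + quote_plus(q) for q in queries]

for url in database_queries("woodworking"):
    print(url)
```

Swap in any keyword ("running", "genealogy", and so on) to generate the queries from the example in the tip.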

Helpful Articles and Resources for Deep Searching

Take advice from the experts and read these articles, blogs, and other resources that can help you understand the deep web.

  • Deep Web – Wikipedia. Get the basics about the deep web as well as links to some helpful resources with this article.
  • Deep Web – AI3:::Adaptive Information. This assortment of articles from Michael Bergman, co-coiner of the phrase "deep web," offers a look at the current state of deep web perspectives.
  • The Invisible Web. This article from About.com provides a very simple explanation of the deep web and offers suggestions for tackling it.
  • ResourceShelf. Librarians and researchers come together to share their findings on fun, helpful, and sometimes unusual ways to gather information from the web.
  • Docuticker. This blog offers the latest publications from government agencies, NGOs, think tanks, and other similar organizations. Many of these posts are links to databases and research statistics that may not appear so easily on typical web searches.
  • TechDeepWeb.com. This site offers tips and tools for IT professionals to find the best deep web resources.
  • Digital Image Resources on the Deep Web. This article includes links to many digital image resources that probably won’t show up on typical search engine results.
  • Timeline of events related to the Deep Web. This timeline puts the entire history of the deep web into perspective as well as offers up some helpful links.
  • The Deep Web. Learn terminology, get tips, and think about the future of the deep web with this article.
  • How to Evaluate Web Resources is a guide by WhoIsHostingThis.com to help students quickly evaluate the credibility of any resource they find on the internet.
Categorized in Deep Web

Islamic terrorists are arming themselves with the technical tools and expertise to attack the online systems underpinning Western companies and critical infrastructure, according to a new study from the Institute for Critical Infrastructure Technology.

The goal of the report was to bring awareness to "a hyper-evolving threat" said James Scott, ICIT co-founder and senior fellow.

Dark web marketplaces and forums make malware and tech expertise widely available and — with plenty of hackers for hire and malware for sale — technical skills are no longer required. A large-scale attack could be just around the corner, said Scott.

"These guys have the money to go on hacker-for-hire forums and just start hiring hackers," he said.

U.S. authorities are well-aware of the rising threat posed by Islamic terrorists armed with advanced cybertools. In April, Defense Secretary Ashton Carter declared a cyberwar against the Islamic State group, or ISIS. Ransomware chatter rose to prominence on dark web jihadi forums around the fall of 2015 and continues to be a topic of debate, particularly among members of ISIS and Boko Haram.

"I had the same position that I have right now with this in December of last year with regards to ransomware hitting the health-care sector," said Scott. "We were seeing the same exact thing."

Much of the chatter on jihadi chat boards comes from Europeans and Americans, often social outcasts living vicariously through the online reputation of their handle — including disenfranchised teens or jailhouse Muslim converts turned radicals, Scott said. They may not have strong coding skills, but they have access to Western institutions and businesses and are looking to leverage that access to serve ISIS.

An example of the sort of conversation that takes place on Islamic dark web forums involved a cleaner in Berlin who worked the overnight shift and wanted to know how they could help, said Scott. Others chimed in, explaining how the janitor could load malware onto a USB device and plug it into a computer to allow them to remotely hack into the network.

"That is the kind of insider threat that we are going to be facing," said Scott. "That is what they are seeing as the next step — an army of insider threats in the West."


Though not known for being particularly sophisticated in their use of technology — beyond the use of encrypted messaging services and creating malicious apps — Islamic terrorists are now aggressively seeking ways to bridge gaps in their knowledge, said Scott. This may come in the form of hiring hackers, recruiting tech-savvy teens and educating new recruits.

"They are rapidly compensating for that slower part of their evolution," said Scott.

For example, ISIS operates what can best be described as a 24-hour cyber help desk, staffed by tech-savvy recruits around the globe. There are always about six operatives available to answer questions, such as how to send encrypted messages, and to strategize about how to leverage local access into cyberattacks. They also share tutorials, cybersecurity manuals and YouTube links, and try to recruit other techies, said Scott.

"It is obvious that cyber jihadists use dark web forums for everything — from discussing useful exploits and attack vectors, to gaining anonymity tips and learning the basics of hacking from the ISIS cyber help desk," he said. "Setting up properly layered attacks is incredibly easy even if one has a modest budget. All one needs is a target and a reason."
ICIT will present its findings and identify possible solutions for protecting critical infrastructure — along with a panel of industry experts and government officials — on June 29 in Washington.

Source:  http://www.cnbc.com/2016/06/15/the-cyber-jihad-is-coming-says-this-security-firm.html

Categorized in Internet Privacy

In 1989, Tim Berners-Lee, English computer scientist and the creator of the World Wide Web, couldn't have predicted that people would be using his idea to spread the word about the Arab Spring uprisings, or raise thousands of dollars to create a product. His goal was simple: he wanted a way to help people find and keep track of information more easily.

Nearly 27 years later, the World Wide Web has grown beyond the single server that Berners-Lee created to become a much larger and more influential entity. But there's one thing that continues to worry Berners-Lee--that some organizations are trying to limit people's ability to access certain types of content on the internet.

"It's been great, but spying, blocking sites, re-purposing people's content, taking you to the wrong websites--that completely undermines the spirit of helping people create," Berners-Lee tells the New York Times.

That's why this week, Berners-Lee and other powerful individuals in tech are hosting an event called the Decentralized Web Summit to discuss ways to give individuals more privacy, and more control over what they can access on the web. They want to find a way to stop governments from blocking certain web pages for example, and find more ways for people to pay for things on the internet without handing over sensitive credit card information.

Berners-Lee also told the Times that he's concerned about how the rising dominance of tech giants, such as Amazon, Google, and Twitter, is discouraging competition among companies that deal with the web, and stemming a more diverse flow of ideas.

"The problem is the dominance of one search engine, one big social network, one Twitter for microblogging," he says. "We don't have a technology problem, we have a social problem."

Berners-Lee and others sketched out their ideas for a few technological solutions that they believe could help decentralize the web. They think it would be beneficial for more websites to adopt a ledger-like style of payment, such as Bitcoin, to give people more control over their money.

Another one of the Decentralized Web Summit's organizers, Brewster Kahle, has also created the Internet Archive, which can store discontinued websites and multiple versions of a web page. Those are small steps, but it's a move back in the direction of Berners-Lee's original version of the World Wide Web: a place where anyone can find the information they need--anytime, anywhere.

Source:  http://www.inc.com/anna-hensel/tim-berners-lee-decentralized-web-summit.html

Categorized in Online Research

Sometimes, a little bit of knowledge can be a very dangerous thing.

Everyone’s guilty of it. Have you ever had a harmless little headache? Then you’ve found yourself with smartphone in hand, searching your symptoms on Google, running down an endless online checklist?

The next thing you know, you’re absolutely petrified you have a brain tumour. Sound familiar? It’s more common than you think.

By giving us instant health information (ranging from medically sound to commercially manipulative to completely crackpot) without the knowledge or context to decipher it, Google has turned us into a generation of raving hypochondriacs, or ‘Cyberchondriacs’.

A Ten Eyewitness News online poll showed that more than 50 percent of people admitted to having taken panicked trips to the doctor after talking themselves into thinking they could be on death’s door.

“They have near convinced me I’m dying,” poll responder Ana Hamed said of Google symptom searches, while Michael Bielaczek said, “I had a cough, I Googled it, turned out I had full blown AIDS.”

Amy Bastian responded, “I’m a nurse in a GP surgery, and the amount of people who Google their symptoms is bloody ridiculous! Sure, if you want to go from having a sore toe to being clinically dead in two clicks, go for it, but it would really just be easier to come see your GP to start with.”

Instead, many people start with a Google search, or an online symptom checker, when they feel ill.

In Australia, half of patients aged 25-44 access health information online, while nearly one in three use the Internet to search specific problems addressed at a GP visit, according to a 2013 study by the Australian General Practice Statistics and Classification Centre (AGPSCC).

But just how accurate is that health information?

If you’re using an online symptom-checker, the answer might shock you.

A study conducted last year by researchers at Harvard Medical School tested 23 of the most popular online symptom checkers, feeding them a range of symptoms from 45 patient case studies. Distressingly, the correct diagnosis was displayed first in only 34 percent of evaluations.

Likewise, the correct diagnosis was displayed amongst the top 20 possible diagnoses only 58 percent of the time.

Your chances of getting proper medical advice online are worse than winning at two-up. Online symptom checkers are a minefield for misdiagnoses.

So how do we navigate the confusion? Luckily, there are a few guidelines to follow to avoid those late-night panic attacks.

Dr Magdalen Campbell from the Sydney North Health Network says it’s all about increasing your health literacy, and using the Internet as a tool together with your GP.

“We realize patients often Google their symptoms,” she said, “but since using the Internet as a diagnostic tool is not always the best way to do things, if we're going to recommend using the Internet, we would do it as part of the consultation.”

Be cautious with the information you find online. Here are some tips, tricks and things to remember:

Don’t Google late at night

If it’s something that can wait, sleep on it. Things tend to look brighter in the morning.

“I usually say don't do it in the middle of the night because you're usually tired and anxious and worried by that stage,” Dr Campbell said.

After a proper night’s sleep, any search results you come across are bound to be less exaggerated by your own fears.

Even doctors have their own GPs

We’re all human, and the advice to resist Internet-based and self-diagnosis goes for everyone, even medical professionals.

“We will tend to, as human beings, disaster-think,” Dr Campbell said. “When we actually get any symptoms, we tend to look at the worst possible scenario and often come out with that. So it's better to actually go to the GP with any information and concerns, then as a partner with the GP, figure out what the symptoms are and what they really mean.”

If you are going to use the Internet, use reputable sources

Anyone can publish anything on the Internet, so take your search results with a grain of salt. And no, you can’t trust Wikipedia.

A recent study showed Wikipedia is the sixth most popular website for accessing medical information online, but nine out of ten articles on some of the most common medical conditions (coronary artery disease, lung cancer, depression, osteoarthritis, hypertension, diabetes and back pain) did not contain the most up-to-date research and health information.

“Dr Google goes world-wide, so some of the information isn't even actually relevant in Australia,” Dr Campbell adds. “Dr Google goes to every single website.

“We say look, start with the very reputable ones. Everything from any of the government sites, the National Prescribing Service (NPS), and then move out from there. Primary Health Networks (PHNs) have links and widgets to various different health information sites.

“And for goodness sake, tell me what you've got from the Internet – I can tell you myself from knowledge whether it’s reputable, or else I can actually do a search on the secure medical websites, where research is being done.”

Know that online symptom searches can cause ‘cyberchondria’

Remember that online symptom checkers show you every possible diagnosis from a cold virus to leukaemia. And they’re only right about one-third of the time. Keep calm, and take your concerns to your doctor.

“They'll look down the list and see something they recognize, or that they are concerned or worried about, and then try to fit their symptoms into what that disease is. So we prefer to actually diagnose something prior to them [looking online],” Dr Campbell said.

The moral? If you use the proper approach, you won’t become a victim of cyberchondria.

The Internet can be a fantastic resource for medical information, but only if used wisely, and in its proper context.

Search in the light of day, use a government- or doctor-recommended resource, and most importantly, remember that your search results are no substitute for your GP.

Keep calm, and Google responsibly.

Source:  http://tenplay.com.au/news/national/june/has-google-created-a-nation-of-cyberchondriacs

Categorized in Search Engine

When I think about the behavior of many business people today, I imagine a breadline. These employees are the data-poor, waiting around at the end of the day on the data breadline. The overtaxed data analyst team prioritizes work for the company executives, and everyone else must be served later. An employee might have a hundred different questions about his job. How satisfied are my customers? How efficient is our sales process? How is my marketing campaign faring?

These data breadlines cause three problems present in most teams and businesses today. First, employees must wait quite a while to receive the data they need to decide how to move forward, slowing the progress of the company. Second, these protracted wait times abrade the patience of teams and encourage teams to decide without data. Third, data breadlines inhibit the data team from achieving its full potential.

Once an employee has been patient enough to reach the front of the data breadline, he gets to ask the data analyst team to help him answer his question. Companies maintain thousands of databases, each with hundreds of tables and billions of individual data points. In addition to producing data, the already overloaded data teams must translate the panoply of figures into something more digestible for the rest of the company, because with data, nuances matter.

The conversation bears more than a passing resemblance to one between a third-grade student and a librarian. Even expert data analysts lose their bearings sometimes, which results in slow response times and inaccurate answers to queries. Both erode the company’s confidence in its data.

Overly delayed by the strapped data team and unable to access the data they need from the data supply chain, enterprising individual teams create their own rogue databases. These shadow data analysts pull data from all over the company and surreptitiously stuff it into database servers under their desks. The problem with the segmented data assembly line is that errors can be introduced at any single step.

A file could be truncated when the operations team passes the data to the analyst team. The data analyst team might use an old definition of customer lifetime value. And an overly ambitious product manager might alter the data just slightly to make it look a bit more positive than it actually is. With this kind of siloed pipeline, there is no way to track how errors happen, when they happen or who committed them. In fact, the error may never be noticed. 

Data fragmentation has another insidious consequence. It incites data brawls, where people shout, yell and labor over figures that just don’t seem to align and that point to diametrically different conclusions.

Imagine two well-meaning teams, a sales team and a marketing team, both planning next year’s budget. They share an objective: to exceed the company’s bookings plan. Each team independently develops a plan, using metrics like customer lifetime value, cost of customer acquisition, payback period, sales cycle length and average contract value.

When there’s no consistency in the data among teams, no one can trust each other’s point of view. So meetings like this devolve into brawls, with people arguing about data accuracy, the definition of shared metrics and the underlying sources of their two conflicting conclusions. 

Imagine a world where data is put into the hands of the people who need it, when they need it, not just for Uber drivers, but for every team in every company. This is data democratization, the beautiful vision of supplying employees with self-service access to the insights they need to maximize their effectiveness. This is the world of the most innovative companies today: technology companies like Uber, Google, Facebook and many others who have re-architected their data supply chains to empower their people to move quickly and intelligently. 

Source:  http://techcrunch.com/2016/06/12/data-breadlines-and-data-brawls/

Categorized in Online Research

One of the most ambitious endeavors in quantum physics right now is to build a large-scale quantum network that could one day span the entire globe. In a new study, physicists have shown that describing quantum networks in a new way—as mathematical graphs—can help increase the distance that quantum information can be transmitted. Compared to classical networks, quantum networks have potential advantages such as better security and being faster under certain circumstances. 

"A worldwide quantum network may appear quite similar to the internet—a huge number of devices connected in a way that allows the exchange of information between any of them," coauthor Michael Epping, a physicist at the University of Waterloo in Canada, told Phys.org. "But the crucial difference is that the laws of quantum theory will be dominant for the description of that information.

"For example, the state of the fundamental information carrier can be a superposition of the basis states 0 and 1. By now, several advantages in comparison to classical information are known, such as prime number factorization and secret communication. However, the biggest benefit of quantum networks might well be discovered by future research in the rapidly developing field of quantum information theory."

Quantum networks involve sending entangled particles across long distances, which is challenging because particle loss and decoherence tend to scale exponentially with the distance.

In their study published in the New Journal of Physics, Epping and coauthors Hermann Kampermann and Dagmar Bruß at the Heinrich Heine University of Düsseldorf in Germany have shown that describing physical quantum networks as abstract mathematical graphs offers a way to optimize the architecture of quantum networks and achieve entanglement across the longest possible distances.

"A network is a physical system," Epping explained. "Examples of a network are the internet and labs at different buildings connected by optical fibers. These networks may be described by mathematical graphs at an abstract level, where the network structure—which consists of nodes that exchange quantum information via links—is represented graphically by vertices connected by edges. An important task for quantum networks is to distribute entangled states amongst the nodes, which are used as a resource for various information protocols afterwards. In our approach, the graph description of the network, which might come to your mind quite naturally, is related to the distributed quantum state."

In the language of graphs, this distributed quantum state becomes a quantum graph state. The main advantage of the graph state description is that it allows researchers to compare different quantum networks that produce the same quantum state, and to see which network is better at distributing entanglement across large distances.

Quantum networks differ mainly in how they use quantum repeaters—devices that offer a way to distribute entanglement across large distances by subdividing the long-distance transmission channels into shorter channels.

Here, the researchers produced an entangled graph state for a quantum network by initially defining vertices with both nodes and quantum repeaters. Then they described how measurements at the repeater stations modify this graph state. Due to these modifications, the vertices associated with quantum repeaters are removed so that only the network nodes serve as vertices in the final quantum state, while the connecting quantum repeater lines become edges.

In the final graph state, the weights of the edges correspond to the number of quantum repeaters and how far apart they are. Consequently, by changing the weights of the edges, the new approach can optimize a given performance metric, such as security or speed. In other words, the method can determine the best way to use quantum repeaters to achieve long-distance entanglement for large-scale quantum networks.
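As a loose illustration of the idea (a plain adjacency sketch, not the paper's actual graph-state formalism), one can picture a repeater chain between two network nodes being collapsed into a single weighted edge whose weight records how many channel segments the repeaters subdivided:

```python
# Minimal sketch (not the paper's graph-state calculus): a chain of
# repeater stations between two network nodes is collapsed into one
# weighted edge between the endpoints, the weight counting the number
# of short channel segments the repeaters created.

def contract_repeater_chain(chain):
    """chain: list of vertices like ['A', 'r1', 'r2', 'B'], where the
    interior entries are repeater stations. Returns the endpoint pair
    and the number of channel segments."""
    a, b = chain[0], chain[-1]
    segments = len(chain) - 1  # each hop is one channel segment
    return (a, b), segments

edge, weight = contract_repeater_chain(['Alice', 'rep1', 'rep2', 'Bob'])
print(edge, weight)  # ('Alice', 'Bob') 3
```

In the researchers' formalism the removal of repeater vertices happens via measurements at the repeater stations; the sketch above only captures the resulting bookkeeping of edges and weights.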

In the future, the researchers plan to investigate the demands for practical implementation. They also want to extend these results to a newer research field called "quantum network coding" by generalizing the quantum repeater concept to quantum routers, which can make quantum networks more secure against macroscopic errors. 

Source:  http://phys.org/news/2016-06-worldwide-quantum-web-graphs.html


Categorized in Online Research

Ladies and gentlemen, it’s time to go negative. (And, no, I’m not referring to this year’s presidential race). I’m talking about negative keywords: those words and phrases that are essential to ensuring your pay-per-click (PPC) ads are displayed to the right audience. 

Going negative: How to eliminate junk PPC queries

Here’s what I mean: You run a small business selling hand-blown glassware; you’ve just launched a new line of wine glasses. You bid on “glasses” as a search term.

A searcher then Googles a keyword phrase that includes “glasses.” Your ad pops up; the searcher clicks on it. Great news, right? Think again: If that searcher is looking for the nearest “glasses repair shop” for eyeglasses, not wine glasses, you’ve just paid for an accidental click from someone who has no intention of ever becoming a customer.

While Google is pretty smart when responding to search queries and integrating user intent into the results, its system isn't perfect. PPC success is predicated on the “Golden Rule of Paid Search”: Give users what they are looking for. As the SEO team at Ranked One has succinctly pointed out, “Paid search is a pull and not a push marketing initiative. Thus, it is vital that we only present searchers with that which is most relevant to their query.”

Here’s another example from the Ranked One team. Say you want to target searchers looking for “pet-friendly hotels in Albuquerque.” Following standard PPC best practices, you create a PPC ad that includes the search phrase in question (“pet-friendly hotels in Albuquerque”) and a landing page that echoes this message.

But that’s not enough. You also need to eliminate so-called “junk queries.” In this example, you would then remove queries from searchers who have no intention of booking a pet-friendly hotel room -- someone searching for hotel jobs in Albuquerque, for example.

True, a few erroneous clicks won’t sink your PPC budget. But, over time, the lack of a strong negative keyword list means your ads will be shown to the wrong target audience.

How to use negative keywords

Campaign level vs. ad-group level. There are two ways you can address negative keywords: Add them at the campaign level or the ad-group level. When you add a negative keyword at the campaign level, this tells Google to never show your ad for this keyword. Use this approach for keywords that will never be associated with your product, like “hotel jobs” for your pet-friendly hotel or “eyeglass repair” for your wine glasses.

When you add negative keywords at the ad group level, you tell Google not to show ads at this particular ad-group level. Ad-group level negative keywords can be used to gain greater control over your AdWords campaigns.
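A rough sketch of the two levels might look like this; the keyword sets and matching logic are illustrative only, and real AdWords matching is considerably more sophisticated:

```python
# Hypothetical sketch of negative keywords at two levels. All names are
# illustrative; real AdWords matching handles phrase and exact negatives too.

campaign_negatives = {"jobs", "repair"}   # never show ads for these terms
ad_group_negatives = {
    "wine-glasses": {"eyeglasses"},       # exclusions for one ad group only
}

def should_show(query, ad_group):
    words = set(query.lower().split())
    if words & campaign_negatives:
        return False                      # blocked campaign-wide
    if words & ad_group_negatives.get(ad_group, set()):
        return False                      # blocked for this ad group only
    return True

print(should_show("glasses repair shop", "wine-glasses"))     # False
print(should_show("hand blown wine glasses", "wine-glasses")) # True
```

The campaign-level set catches queries that will never convert for any ad group, while the per-ad-group sets let you fine-tune which ad a borderline query reaches.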

Traditional vs. protective use.

All of our examples thus far have featured the traditional use of negative keywords -- eliminating extraneous queries that are irrelevant to your product or service. Protective use is a bit different. In a nutshell, you’re restricting the use of a highly specific keyword phrase from general ads, even if this phrase is relevant.


Kissmetrics offers a great example for PPC shoe ads. In its example, you sell red Puma suede sneakers and create a PPC ad with copy targeted at this particular type of shoe (Ad #1). You also have another, broader catch-all ad for general shoe sales (Ad #2).

In this example, you want to be sure that only people searching for “red Puma suede sneakers” see Ad #1. You don’t want any broad matches for “Puma” or “red sneakers” or “suede sneakers.” So, you add those phrases to your negative keyword list for Ad #1. This ad will then be displayed only to searchers with an exact match for “Red Puma suede sneakers,” effectively beating out all the broad match advertisers.
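The routing logic behind this protective setup can be sketched as follows (a simplified illustration; real exact-match behavior in AdWords also handles close variants such as plurals and misspellings):

```python
# Illustrative sketch of "protective" negatives: the exact phrase goes to
# the specific, high-intent ad; everything else falls to the catch-all ad.

SPECIFIC_PHRASE = "red puma suede sneakers"

def pick_ad(query):
    if query.lower().strip() == SPECIFIC_PHRASE:
        return "Ad #1"   # targeted ad for the exact product
    return "Ad #2"       # broad catch-all ad for general shoe sales

print(pick_ad("Red Puma suede sneakers"))  # Ad #1
print(pick_ad("suede sneakers"))           # Ad #2
```

Adding the broad phrases as negatives on Ad #1 is what enforces this routing: without them, Google's broad matching could show the specific ad to low-intent searchers.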

Building your negative list. When you’re selling a product or service, it’s easy to get stuck in the mindset of what you’re offering. You may be surprised by how ambiguous some of your search terms can be! Not sure how to get started building your negative list? Check out this handy keyword list from Tech Wise that includes a broad range of the most common negative keywords for eliminating erroneous queries, ranging from employment to research.

Next, dive into your queries. Ranked One recommends crawling through search query reports by pulling the SQR right in the Google interface. What phrases pop up again and again that are irrelevant to your product or service? What queries don’t match the user intent you’re targeting? Start your research there.

Bottom line. Bidding on the best keywords is only half the battle. Negative keywords are just as important to an effective PPC strategy. Used correctly, they help you save your budget for the highest-quality searches.

Source:  https://www.entrepreneur.com/article/276961 

Categorized in Online Research

Security researchers have found that some of the wealthiest and most developed nations are at the greatest risk of hacks and cyberattacks -- in part because they have more unsecured systems connected to the internet.

Security firm Rapid7 said in its latest research, published Monday, that many Western nations are putting competitiveness and business ahead of security, which will have "dire consequences" for some of the world's largest economies.

The researchers pointed to a correlation between a nation's gross domestic product (GDP) and its internet "presence," with the exposure of insecure, plaintext services, which almost anyone can easily intercept.

Some of the most exposed countries on the internet today include Australia (ranked fourth), China (ranked fifth), France (13th), the US (14th), Russia (19th) and the UK (23rd).

Belgium led the rankings as the most exposed country on the internet, with almost one-third of all systems and devices exposed to the internet.

"Every service we searched for, it came back in the millions," said Tod Beardsley, senior security research manager at Rapid7, who co-authored the report and spoke on the phone last week.

"Everything came back from two million to 20 million systems," he said.


As for the biggest culprits: there were over 11 million systems with direct access to relational databases, about 4.7 million networked systems exposing one of the most commonly attacked ports, and 4.5 million apparent printer services.

But there was one that floated above them all -- a networking relic from the Cold War era.

Dissecting the example, Beardsley said the ongoing widespread use of a decades-old, outdated and unsecured networking protocol would prove his point. He said, citing the research, that scans showed that there are over 14 million devices still using outdated, insecure, plaintext Telnet for remotely accessing files and servers.

Beardsley said it was "encouraging" to see Secure Shell (SSH), its modern replacement, prevail over Telnet -- not least because, given the choice, SSH is far easier to use, which makes the switch an easy one.
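The Telnet-vs-SSH distinction generalizes to the other services the report counts. As a purely illustrative sketch (the port/service pairs below are well-known defaults, not Rapid7's dataset), a classifier separating plaintext from encrypted services might look like:

```python
# Illustrative only: classify common services as plaintext or encrypted,
# in the spirit of the report's Telnet-vs-SSH comparison. The pairs are
# well-known default ports, not figures from Rapid7's scan data.

PLAINTEXT = {23: "telnet", 21: "ftp", 80: "http"}
ENCRYPTED = {22: "ssh", 443: "https", 990: "ftps"}

def classify(port):
    if port in PLAINTEXT:
        return PLAINTEXT[port], "plaintext (exposed)"
    if port in ENCRYPTED:
        return ENCRYPTED[port], "encrypted"
    return None, "unknown"

print(classify(23))  # ('telnet', 'plaintext (exposed)')
print(classify(22))  # ('ssh', 'encrypted')
```

A plaintext service is one whose traffic can be read by anyone on the path, which is exactly why the report treats its prevalence as a measure of national exposure.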

But he said it was frustrating to see millions nevertheless leave their systems wide open to hackers and nation-state attackers.

He echoed similar sentiments from the report, saying that the high exposure rates are a "failure" of modern internet engineering.

"Despite calls from... virtually every security company and security advocacy organization on Earth, compulsory encryption is not a default, standard feature in internet protocol design. Cleartext protocols 'just work,' and security concerns are doggedly secondary," said the paper.

Beardsley said the research is a good starting point for examining whether GDP, or other factors, determines a country's exposure rate, but he stressed that more work needs to be done and that the study is only a foundation for further research.

"There are a million questions I have -- I could talk for an hour," he said.

Source:  http://www.zdnet.com/article/researchers-say-theyve-found-the-most-exposed-countries-on-the-internet/

Categorized in Online Research

A long, long time ago I was talking to Mike Grehan about search engine rankings. He used the term “the rich get richer”, to explain why sites that live at the top of Google are likely to stay there.

One of the reasons is the ease of findability.

A writer who is researching a subject on Google is highly likely to click the top result first. If that web page answers the right questions then it may be enough to earn a citation in an article, and that link will help fortify the search position.

The rich get richer.

I mention this because yesterday I read a brilliant, in-depth post from Glen Allsopp (aka @Viperchill), which illustrates that the rich do indeed get richer, at least in terms of search positions.

In this case, the rich are major publishing groups.

The way they are getting richer is by cross-linking to existing and new websites, from footers and body copy, which are “constantly changing”.

There’s nothing hideously wrong with this approach, but it’s a bit of a risk to tweak the links quite so often. Especially when the anchor text is something other than the site’s brand name.

As Glen says:

“As anyone who has been involved in search engine optimisation for a period of time might wonder, surely getting so many sitewide links in a short timeframe should raise a bit of a red flag?”
It’s interesting to see that Google not only tolerates it, but actively rewards this kind of behaviour, at least in the examples highlighted in Glen’s post.

The short story is that Hearst was found to be linking to a newly launched site, BestProducts, from its portfolio of authority websites, which includes the likes of Cosmopolitan, Elle, Marie Claire and Bazaar.

This helped to put the new site on the map in a rather dramatic way.

Party hard in footerland

Here are a couple of screenshots. The first is from March, when the anchor text was ‘Style Reviews’.

[Screenshot: Cosmopolitan footer link, March]

The second appeared later, with the link text changing to ‘Beauty Reviews’. Note that the link placement changed too.

[Screenshot: Cosmopolitan footer link with updated anchor text and placement]

I’m going to assume that these links are dofollow, which is a potentially risky tactic, and one that has attracted the dreaded manual penalty for some site owners.

Furthermore, this is clearly something that has been done with intent. Design, not accident.

Glen says:

“It’s now obvious that the people working for Woman’s Day, Marie Claire, Popular Mechanics and Esquire had some conversation that went along the lines of, ‘Don’t forget, today’s the day we have to put those links to Best Products in the footer.’”
But did it work?

The results

Glen estimates that BestProducts attracted at least 600,000 referrals from Google (organic) in April 2016, so yep, it has worked incredibly well.

Here are some of the positions that the site has bagged in little over half a year, from a standing start:

[Screenshot: BestProducts’ Google rankings, June 2016]

Pretty amazing, right? Some pretty big, broad terms there.

Glen reckons that the following 16 companies – and the brands they own – dominate Google results.


I suspect that if you look at other industries, such as car hire, where a few brands own hundreds of sub-brands, that you’ll see similar tactics and results.

We are family?

The standout question for me isn’t whether Hearst and its peers are systematically outsmarting Google with a straightforward sitewide link strategy, nor whether that strategy will hold up. It is more about whether Google truly understands related entities.

Does it know that these sites are linked to one another by having the same parent company? And does that discount the link tactics in play here?

Certainly someone on the webspam team would be able to spot that one site was related to another, were it flagged for a manual action. So is Google turning a blind eye?

Here’s what Matt Cutts said about related sites, back in 2014:

“If you have 50 different sites, I wouldn’t link to all 50 sites down in the footer of your website, because that can start to look pretty spammy to users. Instead you might just link to no more than three or four or five down in the footer, that sort of thing, or have a link to a global page, and the global page can talk about all the different versions and country versions of your website.”

“If you’ve got stuff that is all on one area, like .com, and you’ve got 50 or 100 different websites, that is something where I’d be really a lot more careful about linking them together.”

“And that’s the sort of thing where I wouldn’t be surprised if we don’t want to necessarily treat the links between those different websites exactly the same as we would treat them as editorial votes from some other website.”

Note that Matt talks about links to other sites, as opposed to “links with descriptive and ever-changing anchor text”. Somewhat different.

Screw hub pages, launch hub sites

Internal linking works best when there is a clear strategy in place. That normally means figuring out a taxonomy and common vocabulary in advance. It also means understanding the paths you want to create for visitors, to help pull them towards other pages, or in this case, other sites. These should mirror key business goals.

With all that in mind, I think it’s pretty smart, I really do, but let’s see how it plays out. And obviously it takes a rich portfolio of authority websites to play this hand, so yeah… the rich get richer.

Assuming this strategy works out in the long run we can expect to see lots more niche sites being launched by the big publishing groups, underpinned by this kind of cross-site linking.

OK, so this fluid footer linking approach certainly sails a bit close to the wind, and we may not have heard the last of this story, but it once again proves the absolute power of links in putting a site on the map. Take any statements about links not mattering so much in 2016 with a large bucket of salt.

Source:  https://searchenginewatch.com/2016/06/07/are-related-sitewide-footer-links-the-key-to-dominating-google/

Categorized in Search Engine

For those with a head for details and a knack for ferreting out facts, a career as a professional researcher can prove to be both satisfying and lucrative. Research as a career provides you with some flexibility: you can choose to strike out on your own as an independent researcher or to work for a company that needs your expertise. Both have some basic requirements you’ll need to meet to make sure you are up to the task.

Step 1

Pick a type of research that interests you and that you have the skills and experience to handle. You can choose from many different areas such as science, genealogy, advertising or marketing.

Step 2

Take college courses designed to help you learn how to become an effective researcher. Include classes relevant to the subject you want to research. A bachelor’s degree may be enough to get you started as a researcher in many fields, but if you want to do any kind of technical research you’ll most likely find that you need a master’s degree or a Ph.D. Market research will also require that you study psychology, consumer behavior and survey methods. If you already have a degree you may just need to add a few research classes to make yourself eligible for many jobs.

Step 3

Obtain certification from a recognized professional organization, such as the Marketing Research Institute, the Association of Internet Research Specialists or the Association of Professional Genealogists. While this step isn’t always required, certification can help to boost your credibility and increase your chances of getting hired.

Step 4

Contact companies that need researchers, and apply for a job with any of them whose needs match your skills. Often you must apply for a job that is conducted in person, but in some cases you may be able to find a job working from home. Check online job boards as well as postings made through professional organizations, colleges and marketing companies.

[Uploaded by the Association Member: Bridget Miller]

Categorized in Online Research


World's leading professional association of Internet Research Specialists - We deliver Knowledge, Education, Training, and Certification in the field of Professional Online Research. The AOFIRS is considered a major contributor in improving Web Search Skills and recognizes Online Research work as a full-time occupation for those that use the Internet as their primary source of information.
