David J. Redcliff

Thursday, 04 January 2018 01:05

How to Protect Your Home Router from Attacks

A comprehensive guide for choosing and setting up secure Wi-Fi.

Your router, that box sitting in a corner of your house giving you internet access, is in many ways more important than your laptop or mobile phone. It might not store any of your personal information directly, but sensitive data passes through it every time you access various online services and can be stolen or manipulated if the router is hacked.

A compromised router can also serve as a platform for attacking other devices on your local network, such as your phone or laptop, or for launching denial-of-service attacks against internet websites. This can get your IP address blacklisted and can slow down your internet speed.

Because it's exposed directly to the outside world, your router is frequently targeted by automated scans, probes, and exploits, even if you don't see those attacks. And compared to your laptop or phone, your router doesn't have an antivirus program or other security software to protect it.

Unfortunately, most routers are black boxes and users have little control over their software and configurations, especially when it comes to devices supplied by internet service providers to their customers. That said, there are certain actions that users can take to considerably decrease the likelihood of their routers falling victim to automated attacks.

Many of those actions are quite basic, but others require a bit of technical knowledge and some understanding of networking concepts. For less technical users, it might simply be easier to buy a security-focused router with automatic updates such as the Eero, Google OnHub, Norton Core, Bitdefender Box, or F-Secure Sense. The downside is that those routers are expensive, some require annual subscriptions for certain services, and their level of customization is very limited. Ultimately, their users need to trust the vendors to do the right thing.

If you don’t want to get one of those, or already have a router, follow along for a detailed, step-by-step guide on how to secure it.

Choosing a router

If you prefer getting a cheaper router or modem that you can tweak to your needs, avoid getting one from your ISP. Those devices are typically manufactured in bulk by companies in China and elsewhere, and they come with customized firmware that the ISPs might not fully control. This means that security issues can take a very long time to fix and, in some cases, never get patched.

Some ISPs force users to use gateway devices they supply because they come pre-configured for remote assistance and there have been many cases when those remote management features have been poorly implemented, leaving devices open to hacking. Furthermore, users cannot disable remote access because they're often not given full administrative control over such devices.

Whether users can be forced to use a particular modem or router by their ISP varies from country to country. In the US, regulations by the Federal Communications Commission (FCC) are supposed to prevent this, but it can still happen. There are also more subtle device lock-ins where ISPs allow users to install their own devices, but certain services like VoIP will not work without an ISP-supplied device.

If your internet provider doesn't allow you to bring your own device onto its network, at least ask if their device can be configured in bridge mode and if you can install your own router behind it. Bridge mode disables routing functionality in favor of your own device. Also, ask if your ISP's device is remotely managed and if you can opt out and disable that service.

The market for home and small office routers is very diverse so choosing the right router will depend on budget, the space that needs to be covered by its wireless signal, the type of internet connection you have, and other desired features like USB ports for attached storage, etc. However, once you get your list down to a few candidates, it's important to choose a device from a manufacturer that takes security seriously.

Research the company’s security track record: How did it handle vulnerabilities being discovered in its products in the past? How quickly did it release patches? Does it have a dedicated contact for handling security reports? Does it have a vulnerability disclosure policy or does it run a bug bounty program? Use Google to search for terms like “[vendor name] router vulnerability” or “[vendor name] router exploit” and read past reports from security researchers about how they interacted with those companies. Look at the disclosure timelines in those reports to see how fast the companies developed and released patches after being notified of a vulnerability.

It's also important to determine, if possible, how long a device will continue to receive firmware updates after you buy it. With product life cycles becoming shorter and shorter across the industry, you might end up buying a product released two years ago that will reach end-of-support in one year or in several months. And that's not something you want with a router.

Unfortunately, router vendors rarely publish this information on their websites, so obtaining it might involve calling or emailing the company’s support department in your respective country, as there are region-specific device models or hardware revisions with different support periods. You can also look at the firmware update history of the router you intend to buy or of a router from the manufacturer’s same line of products, to get an idea of what update frequency you can expect from the company.

Choose a device that can also run open-source, community-maintained firmware like OpenWrt/LEDE, because it's always good to have options and these third-party projects excel at providing support for older devices that manufacturers no longer update. You can check the device support lists of such firmware projects—OpenWrt, LEDE, DD-WRT, AdvancedTomato, Asuswrt-Merlin—to inform your buying decision.

Once you have a router, it's time to make a few important settings. Start by reading the manual to find out how to connect to the device and access its administration interface. This is usually done from a computer through a web browser.

Change the default admin password

Never leave your router with the default administrator password as this is one of the most common reasons for compromises. Attackers use botnets to scan the entire internet for exposed routers and try to authenticate with publicly known default credentials or with weak and easy-to-guess passwords. Choose a strong password and, if given the option, also change the username to the default administrative account.
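
If your router lets you set an arbitrary admin password, a strong random one can be generated with Python's standard library. A minimal sketch; the 20-character length and symbol set below are illustrative choices, not requirements:

```python
import secrets
import string

# Character pool: letters, digits, and a handful of symbols that most
# router web interfaces accept. Adjust if your router is pickier.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def random_admin_password(length: int = 20) -> str:
    """Draw each character with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = random_admin_password()
print(password)
```

Store the result in a password manager rather than reusing a password from another account.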

Last year, a botnet called Mirai enslaved over 250,000 routers, IP cameras, and other Internet-of-Things devices by connecting to them over Telnet and SSH with default or weak administrative credentials. The botnet was then used to launch some of the largest DDoS attacks ever recorded. More recently, a Mirai clone infected over 100,000 DSL modems in Argentina and other countries.

Secure the administrative interface

Many routers allow users to expose the admin interface to the internet for remote administration and some older devices even have it configured this way by default. This is a very bad idea even if the admin password is changed because many of the vulnerabilities found in routers are located in their web-based management interfaces.

If you need remote administration for your router, read up on how to set up a virtual private network (VPN) server to securely connect into your local network from the internet and then perform management tasks through that connection. Your router might even have the option to act as a VPN server, but unless you understand how to configure VPNs, turning on that feature might be risky and could expose your network to additional attacks.

It's also a common misconception that if a router's administrative interface is not exposed to the internet, the device is safe. For a number of years now, attackers have been launching attacks against routers through cross-site request forgery (CSRF) techniques. Those attacks hijack users' browsers when visiting malicious or compromised websites and force them to send unauthorized requests to routers through local network connections.

In 2015, a researcher known as Kafeine detected a large-scale CSRF attack launched through malicious advertisements placed on legitimate websites. The attack code was capable of targeting over 40 different router models from various manufacturers and attempted to change their Domain Name System (DNS) settings through command injection exploits or through default administrative credentials.

By replacing the DNS servers configured on routers with rogue servers under their control, attackers can direct users to fake versions of the websites they are trying to visit. This is a powerful attack because there's no indication in the browser address bar that something is amiss unless the website uses the secure HTTPS protocol. Even then, attackers can use techniques such as TLS/SSL stripping and many users might not notice that the green padlock is missing. In 2014, DNS hijacking attacks through compromised home routers were used to phish online banking credentials from users in Poland and Brazil.

CSRF attacks usually try to locate routers over the local area network at common IP addresses like 192.168.0.1 or 192.168.1.1 that manufacturers configure by default. However, users can change the local IP address of their routers to something else, for example, 192.168.37.1 or even 10.207.76.1. There's no technical reason why the router should have the first address in an IP netblock, and this simple change can stop many automated CSRF attacks in their tracks.

There are some other techniques that attackers could combine with CSRF to discover the LAN IP address of a router, even when it’s not the default one. However, some routers allow restricting access to their administrative interfaces by IP address.

If this option is available, you can configure the allowed IP address to be different from those automatically assigned by the router to your devices via the Dynamic Host Configuration Protocol (DHCP). For example, configure your DHCP address pool to be from 192.168.37.100 to 192.168.37.199, but specify 192.168.37.50 as the IP address allowed to access the router's administrative interface.

This address will never be automatically assigned to a device, but you can manually configure your computer to temporarily use it whenever you need to make changes to your router's settings. After the changes are done, set your computer to automatically obtain an IP address via DHCP again.
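
The arithmetic behind this trick is a subnet-membership check, which can be sanity-tested with Python's ipaddress module. The subnet, pool boundaries, and admin address below are hypothetical examples, not recommendations:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.37.0/24")     # router's LAN
pool_start = ipaddress.ip_address("192.168.37.100")  # DHCP pool start
pool_end = ipaddress.ip_address("192.168.37.199")    # DHCP pool end
admin_ip = ipaddress.ip_address("192.168.37.50")     # management address

# The admin address must be on the router's subnet...
in_subnet = admin_ip in subnet
# ...but outside the range DHCP hands out automatically.
outside_pool = not (pool_start <= admin_ip <= pool_end)

print(in_subnet and outside_pool)  # True for this choice of addresses
```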

Also, if possible, configure the router interface to use HTTPS and always access it from a private/incognito browser window, so that no authenticated session that could be abused via CSRF remains active in the browser. Don’t allow the browser to save the username and password either.

Shut down risky services

Services like Telnet and SSH (Secure Shell) that provide command-line access to devices should never be exposed to the internet and should also be disabled on the local network unless they're actually needed. In general, any service that’s not used should be disabled to reduce the attack surface.

Over the years, security researchers have found many undocumented "backdoor" accounts in routers that were accessible over Telnet or SSH and which provided full control over those devices. Since there's no way for a regular user to determine if such accounts exist in a router or not, disabling these services is the best course of action.

Another problematic service is Universal Plug and Play (UPnP), which allows devices to discover each other on networks and share their configurations so they can automatically set up services like data sharing and media streaming.

Many UPnP vulnerabilities have been found in home routers over the years, enabling attacks that ranged from sensitive information exposure to remote code execution leading to full compromise.

A router's UPnP service should never be exposed to the internet and, unless absolutely needed, it shouldn't be enabled on the local area network either. There's no simple way to tell if a router's UPnP implementation is vulnerable and the service can be used by other network devices to automatically punch holes through the router's firewall. That's how many IP cameras, baby monitors, and network-attached storage boxes become accessible on the internet without their owners knowing.

Other services that have been plagued by vulnerabilities and should be disabled include the Simple Network Management Protocol (SNMP), the Home Network Administration Protocol (HNAP) and the Customer Premises Equipment WAN Management Protocol (CWMP), also known as TR-069.


SNMP is mostly used in corporate environments, so many home routers don't have the feature, but some do, especially those supplied by ISPs. In 2014, researchers from Rapid7 found SNMP leaks in almost half a million internet-connected devices and in April, two researchers found a weakness in the SNMP implementation of 78 cable modem models from 19 manufacturers, including Cisco, Technicolor, Motorola, D-Link, and Thomson. That flaw could have allowed attackers to extract sensitive information such as administrative credentials and Wi-Fi passwords from devices and to modify their configurations.

HNAP is a proprietary administration protocol that's only found in devices from certain vendors. In 2010, a group of researchers found vulnerabilities in the HNAP implementation of some D-Link routers and in 2014 a worm called The Moon used information leaked through HNAP to target and infect Linksys routers by exploiting an authentication bypass vulnerability.

CWMP, or TR-069, is a remote management protocol used by ISPs, and flawed implementations were exploited by Mirai last year to infect or crash DSL modems from ISPs in Ireland, the U.K., and Germany. Unfortunately, there's usually no way for users to disable TR-069, which is another reason to avoid ISP-supplied devices.

One thing's certain: Attackers are increasingly targeting routers from inside local area networks, using infected computers or mobile devices as a launchpad. Over the past year, researchers have found both Windows and Android malware programs in the wild that were designed specifically to hack into routers over local area networks. This is useful for attackers because infected laptops and phones are carried by their owners onto different networks, reaching routers that wouldn't otherwise be exposed to attacks over the internet.

Security firm McAfee also found an online banking trojan dubbed Pinkslipbot that transforms infected computers into web proxy servers accessible from the internet by using UPnP to automatically request port forwarding from routers.

The Vault7 documents published by WikiLeaks this year describe a set of tools supposedly used by the US Central Intelligence Agency to hack into routers and replace their firmware with one designed to spy on traffic. The toolset includes an exploit named Tomato that can extract a router's administrative password through UPnP from inside the local area network, as well as custom firmware dubbed CherryBlossom that reportedly works on consumer and small business routers from 10 manufacturers.

Unfortunately, when building devices, many manufacturers don't include local area network attacks in their threat model and leave various administration and debugging ports exposed on the LAN interface. So it's often up to users to determine what services are running and to close them, where possible.

Users can scan their routers from inside their local networks to identify open ports and protocols using various tools, a popular one being Nmap with its graphical user interface called Zenmap. Scanning a router from outside the LAN is more problematic because port scanning on the internet might have legal implications depending on jurisdiction. It's not recommended to do this from your own computer, but you can use a third-party online service like ShieldsUP or Pentest-Tools.com to do it on your behalf.
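
Nmap does this far more thoroughly, but the core of a TCP connect scan fits in a few lines of Python. The sketch below scans a throwaway listener on localhost instead of a real router, so it is safe to run anywhere:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: start a local listener so the scan has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0 asks the OS for any free port
listener.listen(1)
demo_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [demo_port])
listener.close()
print(found == [demo_port])  # True
```

Against your own router you would pass its LAN address and a list of interesting ports (23 for Telnet, 22 for SSH, 80/443 for the web interface); only ever scan devices you own.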

Secure your Wi-Fi network

When setting up your Wi-Fi network, choose a long, hard-to-guess passphrase, also known as a Pre-shared Key (PSK)—consider a minimum of 12 alphanumeric characters and special symbols—and always use the WPA2 (Wi-Fi Protected Access II) security protocol. WPA and WEP are not safe and should never be used.
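
The "long and hard-to-guess" advice can be quantified: a random passphrase's strength is roughly its length multiplied by log2 of the character-pool size. A back-of-the-envelope check in Python, assuming a pool of 72 symbols:

```python
import math
import string

def entropy_bits(length: int, pool_size: int) -> float:
    """Entropy, in bits, of `length` characters drawn uniformly
    from a pool of `pool_size` symbols."""
    return length * math.log2(pool_size)

# Example pool: upper- and lower-case letters, digits, and ten symbols.
pool = len(string.ascii_letters) + len(string.digits) + 10  # 72 symbols

print(round(entropy_bits(12, pool), 1))  # 74.0 bits for 12 characters
print(round(entropy_bits(20, pool), 1))  # 123.4 bits for 20 characters
```

Each extra character adds about six bits, so making the passphrase longer helps more than swapping in exotic symbols.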

Disable Wi-Fi Protected Setup (WPS), a feature that allows connecting devices to the network by using a PIN printed on a sticker or by pushing a physical button on the router. Some vendors' WPS implementations are vulnerable to brute-force attacks and it's not easy to determine which ones.

Some routers offer the option to set up a guest wireless network that's isolated from the rest of your LAN, and you can use it to let friends and other visitors use your internet connection without sharing your main Wi-Fi password. Those guests might not have malicious intentions, but their devices might be infected with malware, so it's not a good idea to give them access to your whole network. Since their devices can also be used to attack the router, it's probably best not to let them use your internet connection at all, guest network or not, but that might not be an easy thing to explain to them.

Update your router's firmware

Very few routers have fully automatic update capabilities, but some do provide manual update checking mechanisms in their interfaces or email-based notifications for update availability. Unfortunately, these features might stop working over time as manufacturers make changes to their servers and URLs without taking old models into consideration. Therefore, it’s also good to periodically check the manufacturer's support website for updates.

Some more advanced stuff

If you disable UPnP but want a service that runs inside the LAN to be accessible from the internet—say an FTPS (FTP Secure) server running on your home computer—you will need to manually set up a port forwarding rule for it in the router's configuration. If you do this, you should strongly consider restricting which external IP addresses are allowed to connect to that service, as most routers allow defining an IP address range for port forwarding rules. Also, consider the risks of making those services available externally, especially if they don’t encrypt traffic.
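
The source-address restriction boils down to a subnet-membership test. Here is a sketch of the decision a router's firewall makes for each inbound connection, using Python's ipaddress module; the allowed range is taken from the reserved documentation block (203.0.113.0/24) as a stand-in for, say, your office's network:

```python
import ipaddress

# Hypothetical rule: only this external network may reach the forwarded port.
ALLOWED = ipaddress.ip_network("203.0.113.0/24")

def is_allowed(source_ip: str) -> bool:
    """Mimic the firewall decision for a single inbound connection."""
    return ipaddress.ip_address(source_ip) in ALLOWED

print(is_allowed("203.0.113.42"))  # True  -- inside the allowed range
print(is_allowed("198.51.100.7"))  # False -- the rule drops this source
```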

If you don't use it for guests, the router's guest wireless network can be used to isolate internet-of-things devices on your LAN. Many IoT devices are managed through mobile apps via cloud-based services so they don't need to talk directly to your phone over the local network beyond initial setup.

Doing this protects your computers from the often vulnerable IoT devices and your IoT devices from your computers, in case they become infected. Of course, if you decide to use the guest wireless network for this purpose, change its password and stop sharing it with other people.

Similar network segmentation can be achieved through VLANs (virtual local area networks), but this feature is not commonly available in consumer routers unless those devices run third-party firmware like OpenWrt/LEDE, DD-WRT, or AdvancedTomato. These community-built, Linux-based operating systems for routers unlock advanced networking features, and using them might actually improve security, because their developers tend to patch vulnerabilities more quickly than router vendors.

However, flashing custom firmware on a router will typically void its warranty and, if not done properly, might leave the device in an unusable state. Don't attempt this unless you have the technical knowledge to do it and fully understand the risks involved.

Following the recommendations in this guide will significantly lower the chances of your router falling victim to automated attacks and being enslaved in a botnet that launches the next internet-breaking DDoS attack. However, if a sophisticated hacker with advanced reverse-engineering skills decides to specifically target you, there’s very little you can do to prevent them from eventually breaking into your home router, regardless of what settings you make. But why make it easy for them, right?

Source: This article was published on motherboard.vice.com by Jacob Holcomb

Today, blockchain technology is still at an early stage of its development and will be used in new interesting projects in the future, according to cryptocurrency expert Bogdan Shelygin.

"It’s difficult to predict what will happen to Bitcoin in the future, but I can say with full confidence that Bitcoin is more than just super profits. It has introduced to the world a new technology which is as revolutionary as the Internet," Bogdan Shelygin, an analyst with DeCenter, Russia’s largest blockchain- and cryptocurrency-related community, told Sputnik.

Bitcoin, the world’s most popular cryptocurrency, has shown a meteoric rise in the outgoing year. Its value grew from below $1,000 at the beginning of the year to the historic milestone of $20,000 earlier in December. For some financial experts and economists, however, Bitcoin is a cause for concern as another possible bubble.

According to Shelygin, despite the fact that there are those predicting an imminent collapse of Bitcoin, it is impossible to say whether it is a bubble or not.

"Let’s get to the facts. The price of Bitcoin has fallen before, but it remains valuable for the global community as an alternative to the traditional financial system," the analyst pointed out, adding that the main feature of Bitcoin is its decentralized nature.

Shelygin also said that the phenomenon of Bitcoin is that it is the first cryptocurrency the global community has believed in for almost a decade.

"This means that the most interesting things are yet to come. A similar situation was with the Internet. Google was founded in 1998, but today the company is a pioneer in web and other technologies," he said.

Commenting further, Shelygin also suggested that even if Bitcoin collapses the entire cryptocurrencies market will not fall.

"Bitcoin is only the most popular example of the use of the blockchain technology, but it’s not the most outstanding one. Bitcoin and other cryptocurrencies will contribute to the future improvement of the blockchain. Today, the industry is still too young," the analyst said, adding that there are a number of other interesting blockchain-based projects to watch in 2018, including Ethereum, Bitcoin Cash, and Ripple.

Source: This article was published on sputniknews.com

Researchers are wielding the same strange properties that drive quantum computers to create hack-proof forms of data encryption.

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data -- from medical records to bank transactions -- could be vulnerable to attack.

To fight back against the future threat, researchers are wielding the same strange properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

And now, these quantum encryption techniques may be one step closer to wide-scale use thanks to a new system developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. Their system is capable of creating and distributing encryption codes at megabit-per-second rates, which is five to 10 times faster than existing methods and on par with current internet speeds when running several systems in parallel.


The researchers demonstrate that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

“We are now likely to have a functioning quantum computer that might be able to start breaking the existing cryptographic codes in the near future,” said Daniel Gauthier, a professor of physics at The Ohio State University. “We really need to be thinking hard now of different techniques that we could use for trying to secure the internet.”

The results appear online Nov. 24 in Science Advances.

To a hacker, our online purchases, bank transactions and medical records all look like gibberish due to ciphers called encryption keys. Personal information sent over the web is first scrambled using one of these keys, and then unscrambled by the receiver using the same key. 
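
The scramble-and-unscramble symmetry can be illustrated with a toy XOR cipher, a stripped-down one-time pad. This is not what real websites use (they rely on vetted ciphers such as AES), but the same-key-both-ways principle is the same:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR every byte with the key; applying it twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"transfer $100 to account 42"
key = secrets.token_bytes(len(message))  # the shared secret key

ciphertext = xor_cipher(message, key)    # sender scrambles...
recovered = xor_cipher(ciphertext, key)  # ...receiver unscrambles with the same key

print(recovered == message)  # True
```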

For this system to work, both parties must have access to the same key, and it must be kept secret. Quantum key distribution (QKD) takes advantage of one of the fundamental properties of quantum mechanics -- measuring tiny bits of matter like electrons or photons automatically changes their properties -- to exchange keys in a way that immediately alerts both parties to the existence of a security breach. 
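
That detection mechanism can be simulated classically. The toy BB84-style sketch below (a textbook simplification, not the Duke group's protocol) shows how an intercept-and-resend eavesdropper, who must measure each photon in a guessed basis and re-send it, pushes the error rate in the sifted key to roughly 25 percent:

```python
import random

def bb84_error_rate(n_photons: int, eavesdropper: bool, rng: random.Random) -> float:
    """Fraction of sifted-key bits where Bob disagrees with Alice."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)            # Alice's raw key bit
        basis_a = rng.randint(0, 1)        # Alice's preparation basis
        send_bit, send_basis = bit, basis_a
        if eavesdropper:                   # intercept-and-resend attack
            basis_e = rng.randint(0, 1)
            # Measuring in the wrong basis yields a random result.
            bit_e = send_bit if basis_e == send_basis else rng.randint(0, 1)
            send_bit, send_basis = bit_e, basis_e  # Eve re-sends what she saw
        basis_b = rng.randint(0, 1)        # Bob's measurement basis
        bob_bit = send_bit if basis_b == send_basis else rng.randint(0, 1)
        if basis_b == basis_a:             # sifting: keep matching-basis rounds
            sifted += 1
            errors += bob_bit != bit
    return errors / sifted

print(bb84_error_rate(20_000, eavesdropper=False, rng=random.Random(0)))  # 0.0
print(bb84_error_rate(20_000, eavesdropper=True, rng=random.Random(0)))   # ~0.25
```

The nonzero error rate is what tips off both parties that the exchange was observed.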

Though QKD was first theorized in 1984 and implemented shortly thereafter, the technologies to support its wide-scale use are only now coming online. Companies in Europe now sell laser-based systems for QKD, and in a highly-publicized event last summer, China used a satellite to send a quantum key to two land-based stations located 1200 km apart.

The problem with many of these systems, said Nurul Taimur Islam, a graduate student in physics at Duke, is that they can only transmit keys at relatively low rates -- between tens and hundreds of kilobits per second -- which are too slow for most practical uses on the internet.

“At these rates, quantum-secure encryption systems cannot support some basic daily tasks, such as hosting an encrypted telephone call or video streaming,” Islam said.

Like many QKD systems, Islam’s key transmitter uses a weakened laser to encode information on individual photons of light. But they found a way to pack more information onto each photon, making their technique faster.

By adjusting the time at which the photon is released, and a property of the photon called the phase, their system can encode two bits of information per photon instead of one. This trick, paired with high-speed detectors developed by Clinton Cahall, a graduate student in electrical and computer engineering, and Jungsang Kim, a professor of electrical and computer engineering at Duke, powers their system to transmit keys five to 10 times faster than other methods.
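
The gain comes from having four distinguishable states instead of two: two time bins times two phases gives log2(4) = 2 bits per photon. A hypothetical mapping (the published state set differs) makes the bookkeeping concrete:

```python
# Four photon states as (time_bin, phase_degrees) pairs -- 2 bits each.
# This particular mapping is illustrative, not the published scheme.
ENCODE = {
    "00": (0, 0.0),    # early time bin, phase 0
    "01": (0, 180.0),  # early time bin, phase pi
    "10": (1, 0.0),    # late time bin, phase 0
    "11": (1, 180.0),  # late time bin, phase pi
}
DECODE = {state: bits for bits, state in ENCODE.items()}

def encode(bits: str):
    """Map each pair of key bits onto a single photon state."""
    return [ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def decode(states) -> str:
    return "".join(DECODE[s] for s in states)

key = "0110110001"
photons = encode(key)
print(len(photons))            # 5 photons carry 10 key bits
print(decode(photons) == key)  # True
```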

“It was changing these additional properties of the photon that allowed us to almost double the secure key rate that we were able to obtain if we hadn’t done that,” said Gauthier, who began the work as a professor of physics at Duke before moving to OSU.


In a perfect world, QKD would be perfectly secure. Any attempt to hack a key exchange would leave errors on the transmission that could be easily spotted by the receiver. But real-world implementations of QKD require imperfect equipment, and these imperfections open up leaks that hackers can exploit.

The researchers carefully characterized the limitations of each piece of equipment they used. They then worked with Charles Lim, currently a professor of electrical and computer engineering at the National University of Singapore, to incorporate these experimental flaws into the theory.

“We wanted to identify every experimental flaw in the system, and include these flaws in the theory so that we could ensure our system is secure and there is no potential side-channel attack,” Islam said.

Though their transmitter requires some specialty parts, all of the components are currently available commercially. Encryption keys encoded in photons of light can be sent over the existing optical fiber lines that run beneath cities, making it relatively straightforward to integrate their transmitter and receiver into the current internet infrastructure.

“All of this equipment, apart from the single-photon detectors, exist in the telecommunications industry, and with some engineering we could probably fit the entire transmitter and receiver in a box as big as a computer CPU,” Islam said.

This research was supported by the Office of Naval Research Multidisciplinary University Research Initiative program on Wavelength-Agile QKD in a Marine Environment (N00014-13-1-0627) and the Defense Advanced Research Projects Agency Defense Sciences Office Information in a Photon program. Additional support was provided by Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725, and National University of Singapore startup grant R-263-000-C78-133/731.

CITATION:  "Provably Secure and High-Rate Quantum Key Distribution With Time-Bin Qudits," Nurul T. Islam, Charles Ci Wen Lim, Clinton Cahall, Jungsang Kim and Daniel J. Gauthier. Science Advances, Nov. 24, 2017. DOI: 10.1126/sciadv.1701491

Source: This article was published on today.duke.edu by Kara Manke

Big data is an incredibly useful platform for any business, big or small. It allows brands to delve deeper into the information and insights that fuel their products, services and processes.

For example, you can use data collected on past product performance to make a more informed decision about a future launch or development cycle.

That said, big data as a whole isn’t exactly what you’d call accessible. For starters, you need to deploy the systems and processes to collect useful data.

Then, you need to have a team of data analysts and scientists to sort through it all and find actionable intel.

Finally, you need someone to take that practical data and put it to good use. A company executive just might not have a clear plan for, or understand the applications of, a niche data set.

This doesn’t mean applying big data is impossible. It just means it’s a potentially involved and time-consuming process.

Naturally, this can give organizations and decision makers enough doubt to avoid big data systems. The number of companies using predictive analytics to drive processes and make decisions, for instance, remains low at 29 percent, according to a 2016 PwC press release.

Adoption for these technologies and systems is rising, but not at the rate it could be.

So if you’re reluctant to get involved with big data, you’re not alone. Luckily, there are tools and resources to help you manage the transition.

1. Marketing ROI: Kissmetrics

Big data isn’t just about the data itself. This means that even if you collect customer or visitor data and use it to your advantage, you’re not necessarily using big data fully.

In today’s highly digital landscape, collecting and analyzing data is par for the course. Tracking the number of visitors or traffic referred to your website isn’t necessarily “big data.” An alarming 61 percent of employees say their company is not using big data solutions despite collecting data regularly.

This all relates to marketing, because you’re constantly reviewing data to inform your decisions and actions. The performance of a new product launch will tell you whether or not the resources that went into it were worthwhile.

If it fails to catch on, you know not to waste more resources on similar products or services. But again, this is just surface information — not true insights.

Kissmetrics is a data-powered tool that can help you boost your marketing ROI and processes. It does more than just track information like pageviews, heatmaps, demographics and more.

It actually churns that data and spits out usable intel. You can use the platform to create triggers that resonate with your audience and further boost customer engagement and behaviors.

2. Sales Calling: PhoneBurner

Big data has become a key player in the evolution of modern sales and marketing departments. Few examples illustrate this point better than the ways in which big data has been incorporated into modern sales calling software.

PhoneBurner is a power dialing company that automates the sales calling process so that, when a potential customer answers the phone, they are connected to a live sales agent (with no annoying pause in between).

If the call goes to voicemail, the power dialer leaves your pre-recorded voicemail and logs the call status directly in a software dashboard so you can easily follow up via email at a later date.

3. Third-Party Integration: InsightSquared

One key obstacle in big data is fragmentation. There are so many tools, platforms, third-party portals and information streams that combining and parsing everything can be incredibly daunting.

InsightSquared is a data-driven tool that solves this with solid integration with third-party platforms and services. It can connect to popular enterprise solutions you’re likely already familiar with, such as Google Analytics, ZenDesk, QuickBooks, Salesforce and more.

The information is then mined and analyzed, making it more accessible to you, your teams and even parties who don’t work with data regularly. If you connect a customer relationship tool or CRM, it syncs up the data to offer efficient lead generation, customer tracking, pipeline forecasting and even profitability predictions.

4. Machine Learning and Predictive Analytics: IBM’s Watson Analytics

Big data solutions can offer some pretty amazing insights into your business, customers and strategies. Machine learning and predictive analytics are the way to achieve this.

IBM’s Watson Analytics relies on the IBM Watson machine learning API to deliver remarkable analysis of your data. More importantly, it automates the entire process intelligently to leave you more time to focus elsewhere.

The best feature of Watson is that it unifies all data analysis projects into a single channel or source. It can be connected to marketing and sales tools, finance and human resources, customer data and performance and much more.

Watson also employs a unique AI system to deliver “natural language” insights, which is a fancy way of saying the data it returns is easy to understand.

5. Credit and Payment Analytics: TranzLogic

Love it or hate it, credit card transactions and related payment systems can deliver boatloads of invaluable and necessary data. Even so, associated data streams are not always accessible — especially to smaller businesses or teams — and they can be super complex and confusing.

TranzLogic is designed to process this information and extract actionable intel. Want to measure sales performance and customer patterns or improve promotions? What about using payment data to improve loyalty programs and boost engagement across your customer base?

It’s also a turnkey tool that doesn’t require specialized knowledge or experience. Even if your specialty lies beyond IT or data, you can still make sense of everything reported through TranzLogic.

6. Customer Feedback: Qualtrics

Research can be invaluable to any business or brand. In fact, by conducting studies, surveys and simple questionnaires, you can extract highly useful insights about your audience and products. Why do you think polls and surveys are so popular on social media networks? People love to share their opinions. This is even a great way to discover and hear about new ideas or concepts.

Qualtrics is a customer feedback solution for related big data sources. With the tool, you gain access to three different types of real-time insights: market, customer and employee trends.

This includes things like customer satisfaction, exit interviews and market research. You can even unlock academic research and mobile studies, too. There’s a lot of data here you could put to use.

7. Long-Term Data: Google Analytics

Google Analytics probably needs no introduction. What’s special about Google’s toolset is that it can help you extract long-term information and stats. For instance, you get to see where traffic is coming from and how that fluctuates over time.

With information like this, you can fill in any gaps. You could, for instance, use this to your advantage during the holidays to target common referrals through marketing and promotions.

Social media traffic is another great source of data, which is also tied into Google’s platform.

It’s more of a robust, multi-platform toolset that can be used to track the kind of information you’d want to track anyway.

More importantly, it’s instantly accessible through your browser.

Source: This article was published on bigdata-madesimple.com by Kayla Matthews

Much of the world today is controlled and powered by information, giving credence to the famous quote, “information is power”. Professionals, researchers, organizations, businesses, industries and even governments cannot function without information serving as “fuel” for decision-making, strategizing, and gaining and storing knowledge.

But information is not something that is handed to anyone on a silver platter. It starts with a small raw fact or figure – or a set of raw facts and figures – that are not organized and, all too often, without meaning or context. These are called “data”. By itself, and in its raw form, data may seem useless.

Data will cease to be useless once it undergoes processing, where it will be organized, structured and given context through interpretation and analysis. Processing gives it meaning, effectively turning it into information that will eventually be of great use to those who need it. Collectively, all information will make up bodies of knowledge that will, in turn, benefit various users of this knowledge.

Without data, there won’t be any information. Therefore, no matter how random and useless data may seem, it is actually the most important and basic unit of any information structure or body of knowledge.

To that end, various approaches, tools and methodologies aimed at gathering or collecting data have been formulated.


Whether it is business, marketing, humanities, physical sciences, social sciences, or other fields of study or discipline, data plays a very important role, serving as their respective starting points. That is why, in all of these processes that involve the usage of information and knowledge, one of the very first steps is data collection.

Data collection is described as the “process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer queries, stated research questions, test hypotheses, and evaluate outcomes.”

Depending on the discipline or field, the nature of the information being sought, and the objective or goal of users, the methods of data collection will vary. The approach to applying the methods may also vary, customized to suit the purpose and prevailing circumstances, without compromising the integrity, accuracy and reliability of the data.

There are two main types of data that users find themselves working with – and having to collect.

  1. Quantitative Data. These are data that deal with quantities, values or numbers, making them measurable. They are usually expressed in numerical form, such as length, size, amount, price, and even duration. The use of statistics to generate and subsequently analyze this type of data adds credence or credibility to it, so quantitative data is generally seen as more reliable and objective.
  2. Qualitative Data. These data, on the other hand, deal with quality, making them descriptive rather than numerical in nature. Unlike quantitative data, they are generally not measurable, and are gained mostly through observation. Narratives often make use of adjectives and other descriptive words to refer to data on appearance, color, texture, and other qualities.

In most cases, these two data types determine the choice of method or tool used in data collection. In fact, data collection methods are classified into two categories based on these types of data. Thus, we can safely say that there are two major classifications of data collection methods: quantitative data collection methods and qualitative data collection methods.
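The practical difference between the two data types can be sketched in a few lines of Python (a hypothetical illustration, not from the original article: the flower measurements and labels are made up). Quantitative values support arithmetic directly, while qualitative values are descriptive and, at best, can be counted by category.

```python
from statistics import mean

# Hypothetical records a data collector might keep about the same flowers.
quantitative = {"petal_length_cm": [4.1, 3.8, 4.5], "price_usd": [2.50, 3.00, 2.75]}
qualitative = {"color": ["white", "white", "pink"], "scent": ["mild", "strong", "mild"]}

# Quantitative data is measurable, so it can be summarized mathematically...
avg_length = mean(quantitative["petal_length_cm"])

# ...while qualitative data is descriptive, so the most direct operation
# is counting how often each description appears.
white_count = qualitative["color"].count("white")

print(f"average petal length: {avg_length:.2f} cm")   # 4.13 cm
print(f"flowers described as white: {white_count}")   # 2
```

This is also why the explainer below notes that quantitative data lends itself to statistical analysis while qualitative data tends to be analyzed through interpretation.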


From the definition of “data collection” alone, it is already apparent why gathering data is important: to come up with answers, which come in the form of useful information, converted from data.

But for many, that still does not mean much.

Depending on the perspective of the user and the purpose of the information, there are many concrete benefits that can be gained from data gathering. In general terms, here are some of the reasons why data collection is very important. The first question that we will address is: “why should you collect data?”

Data collection aids in the search for answers and resolutions.

Learning and building knowledge is a natural inclination for human beings. Even at a very young age, we are in search of answers to a lot of things. Toddlers and small children are the ones with the most questions, their curious spirit driving them to repeatedly ask about whatever piques their interest.

A toddler curious about a white flower in the backyard will start collecting data. He will approach the flower in question and look at it closely, taking in the color, the soft feel of the petals against his skin, and even the mild scent that emanates from it. He will then run to his mother and pull her along until they get to where the flower is. In baby speak, he will ask what the flower’s name is, and the mother will reply, “It’s a flower, and it is called a rose.”

It’s white. It’s soft. It smells good. And now the little boy even has a name for it: a rose. When his mother wasn’t looking, he reached for the rose by its stem and tried to pluck it. Suddenly, he felt a prickle in his fingers, followed by a sharp pain that made him yelp. When he looked down at his palm, he saw two puncture marks, and they were bleeding.

The little boy starts to cry, thinking how roses, no matter how pretty and good-smelling, are dangerous and can hurt you. This information will now be embedded in his mind, sure to become one of the most enduring pieces of information or tidbit of knowledge that he will know about the flower called “rose”.

The same goes for market research, for example. A company wants to learn a few things about the market in order to come up with a marketing plan, or tweak an existing marketing program. There’s no way it will be able to do these things without collecting the relevant data.

Data collection facilitates and improves decision-making processes, and the quality of the decisions made.

Leaders cannot make decisive strategies without facts to support them. Planners cannot draw up plans and designs without a basis. Entrepreneurs could not possibly come up with a business idea – much less a viable business plan – out of nothing at all. Similarly, businesses won’t be able to formulate marketing plans, and implement strategies to increase profitability and growth, if they have no data to start from.

Without data, there won’t be anything to convert into useful information that provides a basis for decisions. All that decision-makers are left with is intuition and gut feeling, but even gut feeling and instinct have some basis in facts.

Decision-making processes become smoother, and decisions are definitely better, if there is data driving them. According to a survey by Helical IT, decisions based on gathered data have a success rate 79% higher than those made on intuition alone.

In business, one of the most important decisions concerns resource allocation and usage. Businesses that collect the relevant data will be able to make informed decisions on how to use their resources efficiently.

Data collection improves quality of expected results or output.

Just as having data will improve decision-making and the quality of the decisions, it will also improve the quality of the results or output expected from any endeavor or activity. For example, a manufacturer will be able to produce higher-quality products after designing them using reliable gathered data. Consumers will also find the company’s claims about the product more credible, because they know it was developed after a significant amount of research.

Through collecting data, monitoring and tracking progress will also be facilitated. This gives a lot of room for flexibility, so response can be made accordingly and promptly. Adjustments can be made and improvements effected.

Now we move to the next question, and that is on the manner of collecting data. Why is there a need to be particular about how data is collected? Why does it have to be systematic, and not just done on the fly, using whatever makes the data gatherer comfortable? Why do you have to pick certain methodologies of data collection when you can simply be random with it?

  • Collecting data is expensive and resource-intensive. It will cost you money, time, and other resources. Thus, you have to make sure you make the most of it. You cannot afford to be random and haphazard about how you gather data when there are large amounts of investment at stake.
  • Data collection methods help ensure the accuracy and integrity of the data collected. It’s common sense, really. Using the right data collection method – and using it properly – allows only high quality data to be gathered. In this context, high quality data refers to data that is free from errors and from bias arising from subjectivity, which increases its reliability. High quality, reliable data can then be processed, resulting in high quality information.


We’ll now take a look at the different methods or tools used to collect data, and some of their pros (+) and cons (-). You may notice some methods falling under both categories, which means that they can be used in gathering both types of data.

I. Qualitative Data Collection Methods

Exploratory in nature, these methods are mainly concerned with gaining insights and understanding of underlying reasons and motivations, so they tend to dig deeper. Since the data cannot be quantified, measurability becomes an issue. This lack of measurability leads to a preference for methods or tools that are largely unstructured or, in some cases, structured only to a very small, limited extent.

Generally, qualitative methods are time-consuming and expensive to conduct, and so researchers try to lower the costs incurred by decreasing the sample size or number of respondents.

Face-to-Face Personal Interviews

This is considered the most common data collection instrument for qualitative research, primarily because of its personal approach. The interviewer collects data directly from the subject (the interviewee) in a one-on-one, face-to-face interaction. This is ideal when the data to be obtained must be highly personalized.

The interview may be informal and unstructured – conversational, even – as if taking place between two casual to close friends. The questions asked are mostly unplanned and spontaneous, with the interviewer letting the flow of the interview dictate the next questions to be asked.

However, if the interviewer still wants the data to be standardized to a certain extent for easier analysis, he could conduct a semi-structured interview, asking the same series of open-ended questions of all the respondents. If, instead, the subject chooses her answer from a set of options, what takes place is a closed, structured, fixed-response interview.

  • (+) This allows the interviewer to probe further, by asking follow-up questions and getting more information in the process.
  • (+) The data will be highly personalized (particularly when using the informal approach).
  • (-) This method is subject to certain limitations, such as language barriers, cultural differences, and geographical distances.
  • (-) The person conducting the interview must have very good interviewing skills in order to elicit responses.

Qualitative Surveys

  • Paper surveys or questionnaires. Questionnaires often utilize a structure composed of short questions and, in the case of qualitative questionnaires, they are usually open-ended, with respondents asked to provide detailed answers in their own words. It’s almost like answering essay questions.
    • (+) Since questionnaires are designed to collect standardized data, they are ideal for use in large populations or sample sizes of respondents.
    • (+) The high amount of detail provided will aid analysis of data.
    • (-) On the other hand, the large number of respondents (and data), combined with the high level and amount of detail provided in the answers, will make data analysis quite tedious and time-consuming.
  • Web-based questionnaires. This is basically a web-based or internet-based survey: a questionnaire is uploaded to a site, where respondents log in and complete it electronically. Instead of paper and pen, they use a computer screen and mouse.
    • (+) Data collection is definitely quicker. This is often due to the questions being shorter, requiring less detail than in, say, a personal interview or a paper questionnaire.
    • (+) It is also uncomplicated, since the respondents can be invited to answer the questionnaire by simply sending them an email containing the URL of the site where the online questionnaire is available for answering.
    • (-) There is a limitation on the respondents, since only those who own a computer, have an internet connection, and know their way around online surveys will be able to answer.
    • (-) The smaller amount of detail provided means the researcher may end up with mostly surface data, with little depth or meaning, especially once the data is processed.

Focus Groups

The focus group method is basically an interview method, but done in a group discussion setting. When the object of the data is behaviors and attitudes, particularly in social situations, and resources for one-on-one interviews are limited, the focus group approach is highly recommended. Ideally, a focus group should have anywhere from 3 to around 10 to 13 people, plus a moderator.

Depending on the data being sought, the members of the group should have something in common. For example, a researcher conducting a study on the recovery of married mothers from alcoholism will choose women who are (1) married, (2) have kids, and (3) recovering alcoholics. Other parameters, such as age, employment status, and income bracket, do not have to be similar across the members of the focus group.

The topic that data will be collected about will be presented to the group, and the moderator will open the floor for a debate.

  • (+) There may be a small group of respondents, but the setup or framework in which data is delivered and shared makes it possible to come up with a wide variety of answers.
  • (+) The data collector may also get highly detailed and descriptive data by using a focus group.
  • (-) Much of the success of the discussion within the focus group lies in the hands of the moderator. He must be highly capable and experienced in controlling these types of interactions.

Documental Revision

This method involves the use of previously existing, reliable documents and other sources of information as a source of data for a new research or investigation. It is akin to how a data collector goes to a library and combs through books and other references for information relevant to what he is currently researching.

  • (+) The researcher will gain better understanding of the field or subject being looked into, thanks to the reliable and high quality documents used as data sources.
  • (+) Taking a look into other documents or researches as a source will provide a glimpse of the subject being looked into from different perspectives or points of view, allowing comparisons and contrasts to be made.
  • (-) Unfortunately, this relies heavily on the quality of the document that will be used, and the ability of the data collector to choose the right and reliable documents. If he chooses wrong, then the quality of the data he will collect later on will be compromised.


Observation

In this method, the researcher takes a participatory stance, immersing himself in the setting where his respondents are, generally observing everything while taking down notes.

Aside from note-taking, other documentation methods may be used, such as video and audio recording, photography, and the use of tangible items such as artifacts, mementoes, and other tools.

  • (+) The participatory nature may lead to the researcher getting more reliable information.
  • (+) Data is more reliable and representative of what is actually happening, since they took place and were observed under normal circumstances.
  • (-) The participation may end up influencing the opinions and attitudes of the researcher, making it difficult for him to remain objective and impartial once the data he is looking for comes in.
  • (-) Validity issues may arise, since the researcher’s participation may affect the naturalness of the setting. The observed may become reactive to the idea of being watched. If the researcher plans to observe recovering alcoholic mothers in their natural environment (e.g. at home with their kids), his presence may cause the subjects to behave differently, knowing that they are being observed. This may impair the results.

Longitudinal Studies

This is a research or data collection method that is performed repeatedly, on the same data sources, over an extended period of time. It is an observational research method that could even cover a span of years and, in some cases, even decades. The goal is to find correlations through an empirical or observational study of subjects with a common trait or characteristic.

An example of this is the Terman Study of the Gifted, conducted by Lewis Terman at Stanford University. The study aimed to gather data on the characteristics of gifted children – and how they grow and develop – over their lifetime. Terman started in 1921, and the study extended over the lifespan of its subjects: more than 1,500 boys and girls aged 3 to 19, with IQs higher than 135. To this day, it is the world’s “oldest and longest-running” longitudinal study.

  • (+) This is ideal when seeking data meant to establish a variable’s pattern over a period of time, particularly over an extended period of time.
  • (+) As a method to find correlations, it is effective in finding connections and relationships of cause and effect.
  • (-) The long period may become a setback, considering that the probability of the original group of subjects still being complete 10, 20, or 30 years down the road is very low.
  • (-) Over the extended period, attitudes and opinions of the subjects are likely to change, which can lead to the dilution of data, reducing their reliability in the process.

Case Studies

In this qualitative method, data is gathered through a close look at and in-depth analysis of a “case study” or “case studies” – the unit or units of research, which may be an individual, a group of individuals, or an entire organization. The methodology’s versatility is demonstrated in how it can be used to analyze both simple and complex subjects.

However, the strength of a case study as a data collection method lies in how it utilizes other data collection methods, capturing more variables than a single methodology can. In analyzing a case study, the researcher may employ other methods such as interviewing, administering questionnaires, or conducting group discussions in order to gather data.

  • (+) It is flexible and versatile, analyzing both simple and complex units and occurrences, even over a long period of time.
  • (+) Case studies provide in-depth and detailed information, thanks to how it captures as many variables as it can.
  • (-) Reliability of the data may be put at risk when the case study or studies chosen are not representative of the sample or population.

II. Quantitative Data Collection Methods

Quantitative data can be readily rendered in numerical form, which is then converted and processed into useful information mathematically. The result is often in the form of statistics that are meaningful and, therefore, useful. Unlike qualitative methods, quantitative techniques usually make use of larger sample sizes, because the measurable nature of the data makes that possible and easier.

Quantitative Surveys

Unlike the open-ended questions asked in qualitative questionnaires, quantitative paper surveys pose closed questions, with the answer options provided. The respondents will only have to choose their answer among the choices provided on the questionnaire.

  • (+) Similarly, these are ideal for use when surveying large numbers of respondents.
  • (+) The standardized nature of questionnaires enables researchers to make generalizations from the results.
  • (-) This can be very limiting for respondents, since their actual answer to a question may not be among the options provided on the questionnaire.
  • (-) While data analysis is still possible, it will be restricted by the lack of details.
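The trade-off described above can be made concrete with a short sketch (the question, options, and responses here are hypothetical, and Python is assumed): because every answer in a closed-question survey comes from a fixed list, tallying results and generalizing from them is simple arithmetic.

```python
from collections import Counter

# Hypothetical closed question: "How satisfied are you with our service?"
options = ["very satisfied", "satisfied", "neutral", "dissatisfied"]
responses = ["satisfied", "very satisfied", "satisfied", "neutral",
             "satisfied", "dissatisfied", "very satisfied", "satisfied"]

# Every response is one of the fixed options, so tallying is immediate.
tally = Counter(responses)
total = len(responses)

for option in options:
    share = 100 * tally[option] / total
    print(f"{option}: {tally[option]} of {total} ({share:.1f}%)")
```

An open-ended question would require reading and interpreting each free-text answer before any such summary could be produced, which is exactly why closed questions trade detail for ease of analysis.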


Interviews

Personal one-on-one interviews may also be used to gather quantitative data. When collecting quantitative data, the interview is more structured than when gathering qualitative data, comprising a prepared set of standard questions.

These interviews can take the following forms:

  • Face-to-face interviews: Much like when conducting interviews to gather qualitative data, this can also yield quantitative data when standard questions are asked.
    • (+) The face-to-face setup allows the researcher to make clarifications on any answer given by the interviewee.
    • (-) This can be quite a challenge when dealing with a large sample size or group of interviewees. If the plan is to interview everyone, it is bound to take a lot of time, not to mention a significant amount of money.
  • Telephone and/or online, web-based interviews. Conducting interviews over the telephone is no longer a new concept. Rapidly rising to take the place of telephone interviews is the video interview via internet connection and web-based applications, such as Skype.
    • (+) The net for data collection may be cast wider, since there is no need to travel through distances to get the data. All it takes is to pick up the phone and dial a number, or connect to the internet and log on to Skype for a video call or video conference.
    • (-) The quality of the data may be questionable, especially in terms of impartiality. The net may be cast wide, but it will only reach a specific group of subjects: those who have telephones and internet connections and are knowledgeable about using such technologies.
  • Computer-assisted interviews. This is called CAPI, or Computer-Assisted Personal Interviewing: in a face-to-face interview, the data obtained from the interviewee is entered directly into a database using a computer.
    • (+) The direct input of data saves a lot of time and other resources in converting them into information later on, because the processing will take place immediately after the data has been obtained from the source and entered into the database.
    • (-) The use of computers, databases and related devices and technologies does not come cheap. It also requires a certain degree of being tech-savvy on the part of the data gatherer.
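The CAPI idea, answers entered straight into a database as the interview happens, can be sketched with Python's built-in sqlite3 module (the table and question text here are hypothetical, chosen only for illustration):

```python
import sqlite3

# In-memory database standing in for the interview database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE responses (
    respondent_id INTEGER,
    question TEXT,
    answer TEXT
)""")

def record_answer(respondent_id, question, answer):
    # Each answer is stored the moment it is given, so there is no
    # separate transcription or data-entry step after the interview.
    conn.execute("INSERT INTO responses VALUES (?, ?, ?)",
                 (respondent_id, question, answer))
    conn.commit()

record_answer(1, "How many cars does your household own?", "2")
record_answer(1, "Primary use of the main vehicle?", "commuting")

# The data is immediately queryable, which is where the time savings come from.
count = conn.execute("SELECT COUNT(*) FROM responses").fetchone()[0]
print(f"answers stored: {count}")
```

Because processing can begin as soon as the last answer is entered, this sketch illustrates the (+) point above; the (-) point is that real CAPI setups need hardware and some technical skill on the interviewer's part.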

Quantitative Observation

This is straightforward enough. Data may be collected through systematic observation by, say, counting the number of users present and currently accessing services in a specific area, or the number of services being used within a designated vicinity.

When quantitative data is being sought, the approach is naturalistic observation, which mostly involves using the senses and keen observation skills to get data about the “what”, and not really about the “why” and “how”.

  • (+) It is quite a simple way of collecting data, and not as expensive as the other methods.
  • (-) The problem is that the senses are not infallible. The observer’s perception of the situations and people around him may unwittingly color the data. Bias on the part of the observer is very possible.


Experiments

Have you ever wondered where clinical trials fall? They are considered a form of experiment, and they are quantitative in nature. These methods involve manipulating an independent variable while maintaining varying degrees of control over other variables, most likely the dependent ones. Usually, this is employed to obtain data that will later be used to analyze relationships and correlations.

Quantitative researches often make use of experiments to gather data, and the types of experiments are:

  • Laboratory experiments. This is your typical scientific experiment setup, taking place within a confined, closed and controlled environment (the laboratory), with the data collector being able to have strict control over all the variables. This level of control also implies that he can fully and deliberately manipulate the independent variable.
  • Field experiments. This takes place in a natural environment, “on field” where, although the data collector may not be in full control of the variables, he is still able to do so up to a certain extent. Manipulation is still possible, although not as deliberate as in a laboratory setting.
  • Natural experiments. This time, the data collector has no control over the independent variable whatsoever, which means it cannot be manipulated. Therefore, what can only be done is to gather data by letting the independent variable occur naturally, and observe its effects.

You can probably name several other data collection methods, but the ones discussed here are the most commonly used. At the end of the day, choosing a collection method is only 50% of the whole process. The correct usage of the chosen method also has a bearing on the quality and integrity of the data gathered.

Source: This article was published on cleverism.com by Anastasia

In the early years of the 20th Century, US carmakers had it good. As quickly as they could manufacture cars, people bought them.

By 1914, that was changing. In higher price brackets especially, purchasers and dealerships were becoming choosier. One commentator warned that the retailer "could no longer sell what his own judgement dictated". Instead, "he must sell what the consumer wanted".

That commentator was Charles Coolidge Parlin, widely recognised as the world's first professional market researcher and, indeed, the man who invented the very idea of market research.

A century later, the market research profession is huge: in the United States alone, it employs about 500,000 people.



50 Things That Made the Modern Economy highlights the inventions, ideas and innovations that helped create the economic world.

It is broadcast on the BBC World Service.

Parlin was tasked with taking the pulse of the US automobile market. He travelled tens of thousands of miles, and interviewed hundreds of car dealers.

After months of work, he presented his employer with what he modestly described as "2,500 typewritten sheets, charts, maps, statistics, tables etc".

Better adverts?

You might wonder which carmaker employed Parlin to conduct this research. Was it, perhaps, Henry Ford, who at the time was busy gaining an edge on his rivals with another innovation - the assembly line?

But no: Ford didn't have a market research department to gauge what customers wanted.

Perhaps that's no surprise. Henry Ford is widely supposed to have quipped that people could have a Model T in "any colour they like, as long as it's black".

In fact, no carmakers employed market researchers.

Image caption: Charles Parlin was charged with investigating markets to facilitate more effective advertising

Parlin had been hired by a magazine publisher.

The Curtis Publishing Company was responsible for some of the most widely read periodicals of the time: the Saturday Evening Post, The Ladies' Home Journal, The Country Gentleman.

The magazines depended on advertising revenue.

The company's founder thought he'd be able to sell more advertising space if advertising were perceived as more effective, and wondered if researching markets might make it possible to devise better adverts.


'Constructive service'

In 1911, he set up a new division of his company to explore this vaguely conceived idea, headed by Charles Parlin. It wasn't an obvious career move for a 39-year-old high school principal from Wisconsin - but then, being the world's first market researcher wouldn't have been an obvious career move for anyone.

Parlin started by immersing himself in agricultural machinery, then tackled department stores. Not everyone saw value in his activities, at first.

Image caption: The crowded street outside Selfridges in Oxford Street, London, on its opening day, 15 March 1909. Department stores such as Selfridges also had a massive influence on the way people shopped

Even as he introduced his pamphlet The Merchandising of Automobiles: An Address to Retailers, he still felt the need to include a diffident justification of his job's existence.

He hoped to be "of constructive service to the industry as a whole," he wrote, explaining that carmakers spent heavily on advertising, and his employers wanted to "ascertain whether this important source of business was one which would continue". They needn't have worried.

'Consumer-led' approach

The invention of market research marks an early step in a broader shift from a "producer-led" to "consumer-led" approach to business - from making something then trying to persuade people to buy it, to trying to find out what people might buy, and then making it.

The producer-led mindset is exemplified by Henry Ford's "any colour, as long as it's black".

From 1914 to 1926, only black Model Ts rolled off Ford's production line: it was simpler to assemble cars of a single colour, and black paint was cheap and durable.

Image caption: Henry Ford, pictured with one of his Model T cars in the 1930s, famously began by selling one type of car available in one colour

All that remained was to persuade customers that what they really wanted was a black Model T. To be fair, Ford excelled at this.

Few companies today would simply produce what's convenient, then hope to sell it.

A panoply of market research techniques helps determine what might sell: surveys, focus groups, beta testing. If metallic paint and go-faster stripes will sell more cars, that's what will get made.

Where Parlin led, others eventually followed.

By the late 1910s, not long after Parlin's report on automobiles, companies had started setting up their own market research departments. Over the next decade, US advertising budgets almost doubled.

Image caption: George Gallup pioneered opinion polls in the 1930s

Approaches to market research became more scientific. In the 1930s, George Gallup pioneered opinion polls. The first focus group was conducted in 1941 by an academic sociologist, Robert K Merton.

He later wished he could have patented the idea and collected royalties.

But systematically investigating consumer preferences was only part of the story. Marketers also realised it was possible systematically to change them.

Robert K Merton coined a phrase to describe the kind of successful, cool or savvy individual who routinely features in marketing campaigns: the "role model".

Manufacturing desire

The nature of advertising was changing: no longer merely providing information, but trying to manufacture desire.

Sigmund Freud's nephew Edward Bernays pioneered the fields of public relations and propaganda.

In 1929, he helped the American Tobacco Company to persuade women that smoking in public was an act of female liberation. Cigarettes, he said, were "torches of freedom".

Image caption: Adverts such as those for Lucky Strike cigarettes began to portray smoking and smokers as liberated and modern

Today, attempts to discern and direct public preferences shape every corner of the economy.

Any viral marketer will tell you that creating buzz remains more of an art than a science, but with ever more data available, investigations of consumer psychology can get ever more detailed.

Where Ford offered cars in a single shade of black, Google famously tested the effect on click-through rates of 41 slightly different shades of blue.

Image caption: Google carried out exhaustive tests on which precise shade of blue performed best

Should we worry about the reach and sophistication of corporate efforts to probe and manipulate our consumer psyches?

The evolutionary psychologist Geoffrey Miller takes a more optimistic view.

"Like chivalrous lovers," Miller writes, "the best marketing-oriented companies help us discover desires we never knew we had, and ways of fulfilling them we never imagined." Perhaps.

Conspicuous consumption

Miller sees humans showing off through our consumer purchases much as peacocks impress peahens with their tails.

Such ideas hark back to an economist and sociologist named Thorstein Veblen, who invented the concept of conspicuous consumption back in 1899.

Charles Coolidge Parlin had read his Veblen. He understood the signalling power of consumer purchases.

"The pleasure car," he wrote in his address to retailers, "is the travelling representative of a man's taste or refinement."

"A dilapidated pleasure car," he added, "like a decrepit horse, advertises that the driver is lacking in funds, or lacking in pride."


In other words, perhaps not someone you should trust as a business associate - or a husband.

Signalling these days is much more complex than merely displaying wealth: we might choose a Prius if we want to display our green credentials, or a Volvo if we want to be seen as safety-conscious.

These signals carry meaning only because brands have spent decades consciously trying to understand and respond to consumer desires - and to shape them.

By contrast with today's adverts, those of 1914 were delightfully unsophisticated.

The tagline of one, for a Model T, said: "Buy it because it's a better car." Isn't that advertisement, in its own way, perfect? But it couldn't last.

Charles Coolidge Parlin was in the process of ushering us towards a very different world.

Source: This article was published on bbc.com by Tim Harford

The warnings consumers hear from information security pros tend to focus on trust: Don't click web links or attachments from an untrusted sender. Only install applications from a trusted source or from a trusted app store. But lately, devious hackers have been targeting their attacks further up the software supply chain, sneaking malware into downloads from even trusted vendors, long before you ever click to install.

On Monday, Cisco's Talos security research division revealed that hackers sabotaged the ultra-popular, free computer-cleanup tool CCleaner for at least the last month, inserting a backdoor into updates to the application that landed in millions of personal computers. That attack betrayed basic consumer trust in CCleaner developer Avast, and software firms more broadly, by lacing a legitimate program with malware—one distributed by a security company, no less.

It's also an increasingly common incident. Three times in the last three months, hackers have exploited the digital supply chain to plant tainted code that hides in software companies' own systems of installation and updates, hijacking those trusted channels to stealthily spread their malicious code.

"There's a concerning trend in these supply-chain attacks," says Craig Williams, the head of Cisco's Talos team. "Attackers are realizing that if they find these soft targets, companies without a lot of security practices, they can hijack that customer base and use it as their own malware install base...And the more we see it, the more attackers will be attracted to it."

According to Avast, the tainted version of the CCleaner app had been installed 2.27 million times from when the software was first sabotaged in August until last week, when a beta version of a Cisco network monitoring tool discovered the rogue app acting suspiciously on a customer's network. (Israeli security firm Morphisec alerted Avast to the problem even earlier, in mid-August.) Avast cryptographically signs installations and updates for CCleaner, so that no imposter can spoof its downloads without possessing an unforgeable cryptographic key. But the hackers had apparently infiltrated Avast's software development or distribution process before that signature occurred, so that the antivirus firm was essentially putting its stamp of approval on malware, and pushing it out to consumers.
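The gap the CCleaner attack exposed can be sketched as a simple digest comparison, a simplified stand-in for full cryptographic signature verification. The function name and sample bytes below are illustrative, not Avast's actual mechanism:

```python
import hashlib

def verify_download(data: bytes, published_sha256: str) -> bool:
    """Return True if downloaded bytes match the vendor's published digest.

    Note the limitation the CCleaner incident exposed: this only proves the
    file is the one the vendor released. If attackers compromise the build
    pipeline *before* signing, the vendor's digest covers the malware too,
    and the check still passes.
    """
    return hashlib.sha256(data).hexdigest() == published_sha256

# Hypothetical installer bytes and the digest a vendor would publish for them
installer = b"example installer payload"
digest = hashlib.sha256(installer).hexdigest()

assert verify_download(installer, digest)        # legitimate release passes
assert not verify_download(b"tampered", digest)  # post-release tampering fails
```

The sketch shows why signing catches tampering after release but is blind to compromise upstream of the signing step.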

That attack comes two months after hackers used a similar supply-chain vulnerability to deliver a massively damaging outbreak of destructive software known as NotPetya to hundreds of targets focused in Ukraine, but also branching out to other European countries and the US. That software, which posed as ransomware but is widely believed to have in fact been a data-wiping disruption tool, commandeered the update mechanism of an obscure—but popular in Ukraine—piece of accounting software known as MeDoc. Using that update mechanism as an infection point and then spreading through corporate networks, NotPetya paralyzed operations at hundreds of companies, from Ukrainian banks and power plants, to Danish shipping conglomerate Maersk, to US pharmaceutical giant Merck.

One month later, researchers at Russian security firm Kaspersky discovered another supply chain attack they called "Shadowpad": Hackers had smuggled a backdoor capable of downloading malware into hundreds of banks, energy firms, and drug companies via corrupted software distributed by the South Korea-based firm Netsarang, which sells enterprise and network management tools. “ShadowPad is an example of how dangerous and wide-scale a successful supply-chain attack can be," Kaspersky analyst Igor Soumenkov wrote at the time. "Given the opportunities for reach and data collection it gives to the attackers, most likely it will be reproduced again and again with some other widely used software component." (Kaspersky itself is dealing with its own software trust problem: The Department of Homeland Security has banned its use in US government agencies, and retail giant Best Buy has pulled its software from shelves, due to suspicions that it too could be abused by Kaspersky's suspected associates in the Russian government.)

Supply-chain attacks have intermittently surfaced for years. But the summer's repeated incidents point to an uptick, says Jake Williams, a researcher and consultant at security firm Rendition Infosec. "We have a reliance on open-source or widely distributed software where the distribution points are themselves vulnerable," says Williams. "That’s becoming the new low-hanging fruit."

Williams argues that the move up the supply chain may be due in part to improved security for consumers, and companies cutting off some other easy routes to infection. Firewalls are near-universal, finding hackable vulnerabilities in applications like Microsoft Office or PDF readers isn't as easy as it used to be, and companies are increasingly—though not always—installing security patches in a timely manner. "People are getting better about general security," Williams says. "But these software supply-chain attacks break all the models. They pass antivirus and basic security checks. And sometimes patching is the attack vector."


In some recent cases, hackers have moved yet another link up the chain, attacking not the software companies' products directly but the development tools used by those companies' programmers. In late 2015, hackers distributed a fake version of the Apple developer tool Xcode on sites frequented by Chinese developers. Those tools injected malicious code known as XcodeGhost into 39 iOS apps, many of which passed Apple's App Store review, resulting in the largest-ever outbreak of iOS malware. And just last week, a similar—but less serious—problem hit Python developers, when the Slovakian government warned that a Python code repository known as Python Package Index, or PyPI, had been loaded with malicious code.

These kinds of supply-chain attacks are especially insidious because they violate every basic mantra of computer security for consumers, says Cisco's Craig Williams, potentially leaving those who stick to known, trusted sources of software just as vulnerable as those who click and install more promiscuously. That goes double when the proximate source of malware is a security company like Avast. "People trust companies, and when they're compromised like this it really breaks that trust," says Williams. "It punishes good behavior."

These attacks leave consumers, Williams says, with few options to protect themselves. At best, you can try to vaguely suss out the internal security practices of the companies whose software you use, or read up on different applications to determine if they're created with security practices that would prevent them from being corrupted.

But for the average internet user, that information is hardly accessible or transparent. Ultimately, the responsibility for protecting those users from the growing rash of supply-chain attacks will have to move up the supply chain, too—to the companies whose own vulnerabilities have been passed down to their trusting customers.

Source: This article was published on wired.com by Andy Greenberg

Monday, 18 September 2017 03:39

20 of Google’s limits you may not know exist

Don't get caught off guard by limitations you weren't aware of! Columnist Patrick Stox shares 20 Google limitations that may impact SEO efforts.

Google has a lot of different tools, and while they handle massive amounts of data, even Google has its limits. Here are some of the limits you may eventually run into.

1. 1,000 properties in Google Search Console

Per Google’s Search Console Help documentation, “You can add up to 1,000 properties (websites or mobile apps) to your Search Console account.”

2. 1,000 rows in Google Search Console

Many of the data reports within Google Search Console are limited to 1,000 rows in the interface, but you can usually download more. That’s not true of all of the reports, however (like the HTML improvements section, which doesn’t seem to have that limit).

3. Google Search Console will show up to 200 site maps

The limit for the number submitted is higher, but you will only be shown 200. Each of those could be a sitemap index file as well, and each index seems to have a display limit of 400 sitemaps. You could technically put each page of a website in its own sitemap file, bundle those into sitemap index files, and see the individual indexation of 80,000 pages in each property… not that I recommend this.

4. Disavow file size has a limit of 2MB and 100,000 URLs

According to Search Engine Roundtable, this is one of the errors that you can receive when submitting a disavow file.

5. Render in Google Search Console cuts off at 10,000 pixels

Google Webmaster Trends Analyst John Mueller had mentioned that there was a cutoff for the “Fetch as Google” feature, and it looks like that cutoff is 10,000 pixels, based on testing.

6. Google My Business allows 100 characters in a business name


7. 10 million hits per month per property in GA (Google Analytics)

Once you’ve reached this limit, you’ll either be sampled or have to upgrade.


8. Robots.txt max size is 500KB

As stated on Google’s Robots.txt Specifications page, “A maximum file size may be enforced per crawler. Content which is after the maximum file size may be ignored. Google currently enforces a size limit of 500 kilobytes (KB).”
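As a sketch, the 500KB ceiling is easy to check before deploying a robots.txt file. The helper name below is mine, and I'm treating a kilobyte as 1,024 bytes, which is an assumption:

```python
MAX_ROBOTS_BYTES = 500 * 1024  # Google's stated 500KB ceiling (assuming 1KB = 1,024 bytes)

def robots_within_limit(robots_txt: str) -> bool:
    """Return True if a robots.txt body fits under the enforced size limit.
    Rules past the cutoff may simply be ignored by the crawler."""
    return len(robots_txt.encode("utf-8")) <= MAX_ROBOTS_BYTES

assert robots_within_limit("User-agent: *\nDisallow: /private/\n")
assert not robots_within_limit("#" * (MAX_ROBOTS_BYTES + 1))
```

Checking the byte length of the encoded file, rather than the character count, matters for sites with non-ASCII paths.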

9. Sitemaps are limited to 50MB (uncompressed) and 50,000 URLs

Per Google’s Search Console Help documentation:

All formats limit a single sitemap to 50MB (uncompressed) and 50,000 URLs. If you have a larger file or more URLs, you will have to break it into multiple sitemaps. You can optionally create a sitemap index file (a file that points to a list of sitemaps) and submit that single index file to Google. You can submit multiple sitemaps and/or sitemap index files to Google.
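Splitting a large URL set the way the documentation describes can be sketched like this. The file naming, the example.com domain, and the bare-bones XML assembly are all illustrative; a real generator should also escape URLs and respect the 50MB cap:

```python
MAX_URLS = 50_000  # Google's per-sitemap URL limit

def split_into_sitemaps(urls, base="https://example.com/sitemap"):
    """Chunk a URL list into sitemap files and build an index pointing at them."""
    chunks = [urls[i:i + MAX_URLS] for i in range(0, len(urls), MAX_URLS)]
    sitemaps = []
    for n, chunk in enumerate(chunks, 1):
        body = "\n".join(f"  <url><loc>{u}</loc></url>" for u in chunk)
        xml = ('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
               f"{body}\n</urlset>")
        sitemaps.append((f"{base}-{n}.xml", xml))
    index_body = "\n".join(f"  <sitemap><loc>{name}</loc></sitemap>"
                           for name, _ in sitemaps)
    index = ('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
             f"{index_body}\n</sitemapindex>")
    return sitemaps, index
```

For 120,000 URLs this yields three sitemap files (50,000 + 50,000 + 20,000 entries) plus a single index file to submit.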

10. Keep URLs to 2,083 or fewer characters

While Google doesn’t have a limit, you probably shouldn’t go over Internet Explorer’s limit of 2,083 characters in the URL.

11. Google’s crawl limit per page is a couple hundred MBs

That is according to Google’s John Mueller and represents a significant jump from the 10MB limit in 2015.

12. Keep the number of links on a page to a few thousand at most

While Google doesn’t have a hard limit on the number of links per page, they do recommend keeping it to “a reasonable number,” clarifying that this number is “a few thousand at most.”

13. 5 redirect hops at one time

Google’s John Mueller has said that Googlebot will follow up to five redirects at the same time. I don’t know if anyone has ever looked into the total number Google will follow. I did a little digging in Google Search Console and found one page still showing links as “via intermediate links” with a 10-hop chain. Yes, the original still showed in that case, but I also found some others that were cut off at six hops, even though they had more in the chain. I would say keep it to as few as you can, just in case.
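The five-hop behaviour can be modelled with a tiny resolver. The redirect map below stands in for real HTTP 301/302 responses, and the names are mine:

```python
MAX_HOPS = 5  # Googlebot reportedly follows up to five redirects per crawl pass

def resolve(url, redirects, max_hops=MAX_HOPS):
    """Follow a redirect chain, abandoning it after max_hops hops.
    `redirects` maps a URL to its redirect target."""
    hops = 0
    while url in redirects:
        if hops == max_hops:
            return None  # chain too long: the crawler gives up this pass
        url = redirects[url]
        hops += 1
    return url

five_hops = {f"/{i}": f"/{i + 1}" for i in range(5)}  # /0 -> ... -> /5
six_hops = {f"/{i}": f"/{i + 1}" for i in range(6)}   # /0 -> ... -> /6

assert resolve("/0", five_hops) == "/5"  # exactly five hops still resolves
assert resolve("/0", six_hops) is None   # the sixth hop is one too many
```

In practice Googlebot may retry a long chain on a later crawl, which is consistent with the ten-hop chain eventually resolving in the Search Console observation above; shortening chains just removes the gamble.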

14. No limit on word count on a page

It’s often recommended to keep it to 250 words, but there’s really no limit.

15. Google search limits to 32 words


Fun fact: Each word is also limited to 128 characters.

16. 16 words in alt text

While there’s not really a limit per se, this test is still live, and only the first 16 seem to count.

17. There is no limit to how many times a site can show on first page

That’s right, one domain can take the entire first page if it’s relevant enough.

18. YouTube maximum upload size is 128 GB or 12 hours

Per the YouTube Help documentation:

The maximum file size that you can upload is 128 GB or 12 hours, whichever is less. We’ve changed the limits on uploads in the past, so you may see older videos that are longer than 12 hours.

19. Google Keyword Planner limits you to 700

You are limited to 700 keywords in Keyword Ideas. This is also the limit when uploading a file to get search volume and trends, but you can upload 3,000 keywords at a time to the forecaster.

20. YouTube’s counter limit

YouTube’s counter used to be a 32-bit integer, limiting the possible video views it would show to a little over 2 billion (2,147,483,647). YouTube now uses a 64-bit integer, which can show ~9.22 quintillion views (9,223,372,036,854,775,807).
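The ceilings involved fall straight out of signed two's-complement arithmetic; a quick sketch (the helper function is mine, for illustration):

```python
# Maximum values a signed counter can represent
INT32_MAX = 2**31 - 1  # 2,147,483,647: the old YouTube view-count ceiling
INT64_MAX = 2**63 - 1  # 9,223,372,036,854,775,807: roughly 9.22 quintillion

def counter_overflows(count: int, ceiling: int = INT32_MAX) -> bool:
    """Return True if a view count exceeds what the counter can hold."""
    return count > ceiling

assert counter_overflows(INT32_MAX + 1)                # one past the 32-bit limit
assert not counter_overflows(INT32_MAX + 1, INT64_MAX) # comfortably within 64 bits
```

The same off-by-one distinction explains why the maximum is 2^31 − 1 rather than 2^31: one bit is reserved for the sign.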

Source: This article was published on searchengineland.com by Patrick Stox

The realities of the dark net are very different to community expectations, says criminologist James Martin.

Drug trading on the dark net generally elicits images of sinister underworld figures, dimly lit rooms, locked doors and perilous cloak-and-dagger dealings -- a veritable "house of horrors". But according to one leading dark web researcher, the reality is quite different.

In fact, Australian criminologist James Martin believes the dark net has significant potential to promote a more responsible drug culture and lessen the violence associated with street dealing and criminal gangs.

"The conventional illicit drug trade is pretty dangerous and in the absence of a legal, well-regulated drug market, it looks like the dark net actually offers a lot of potential benefits both to users and dealers, and to the general public," he told HuffPost Australia.


A senior lecturer in criminology at Macquarie University, Martin will be presenting his subversive take on the online drug market at TedxMelbourne's 'Rebels, Revolutionaries & Us' next Tuesday, September 19.

So just what is the dark net, and how are people using it to buy illegal drugs?

What is the...

Surface web (or clear net) - "That's basically any part of the Internet that you can access through a search browser (i.e. it comes up in a Google search), so you're talking media websites, your Hotmail or whatever it may be."

Deep web - "Beneath the surface web, you've got the deep web. This is the vast majority of the Internet, and it's basically anything that isn't immediately accessible through a search browser, such as Intranets. It's not necessarily anything nefarious."

Dark net (or anonymous web) - "Beneath that, you've got the dark net or the TOR network... this is a different thing entirely. It's an encrypted subset of the Internet that's only accessible through a TOR browser... Once you start using this browser, you can access a whole lot of different websites that you can't access otherwise -- dot onion sites. You can also send, host and receive information without revealing your IP address, so you can't be monitored by the authorities."

Because it's almost untraceable, the dark net is used by many parts of the criminal underworld, including by terror groups and for exchanging child exploitation material, illegal firearms and stolen credit cards, but the bulk of the people on it are seeking illegal drugs.


To access the dark net, users download an encrypted web browser -- the TOR network being the most common -- which provides access to illicit websites. These sites are protected, so they won't appear in a Google search or through typing a URL into a regular web browser.

Users' data and IP addresses are encrypted, making them very difficult to trace.

But researchers actually know more about the types, quantities and price of drugs sold on the dark net than they do of physical illicit drug supply lines.

"One of the weird things about the dark net is that it's not actually all that dark," Martin said.

"These sites are publicly available and anyone can download TOR and see what they look like."


For example, we know that cannabis is the most common dark net drug in Australia, accounting for a quarter of all sales, followed by prescription drugs (20 percent) and ecstasy (16 percent). Methamphetamines such as 'ice' account for 12 percent of Australia's online drug trades, while heroin makes up just three percent.

Illicit drugs are paid for using bitcoin or another encrypted currency, and mailed through the post.

But despite this accessibility, law enforcement agencies have found it extremely difficult to crack down on the online trade.

"Conventional anti-drug operations usually revolve around things like buy-and-bust operations where you've got an undercover police officer who pretends to be a customer, and once an exchange takes place then they can effect an arrest," Martin explained.

"That's a really simple but very efficient kind of police operation when you think of it from an evidence perspective.

"You've got the offender, you've got the drugs, you've got the money, and usually you've got some sort of form of surveillance as well and that makes a very compelling package that you can present in court."

But on the dark net, this kind of police operation isn't possible.

Both the communications and the financial transactions of drug deals are encrypted, and buyers and users need never meet in person.

The drugs are commonly sold in small quantities and are frequently mailed across international borders, making tracing them through the postal service both costly and impracticable.

"For a transnational policing operation to take place -- say, someone in Australia buying drugs from the UK -- that requires a lot of international police cooperation and law enforcement. It would be difficult to justify the kind of expense associated with that for small quantities of illicit drugs," Martin explained.

The criminologist has been researching the dark net drug trade since it first made news headlines with the establishment of the Silk Road in 2011.

Silk Road creator Ross Ulbricht (who went by the pseudonym Dread Pirate Roberts) has been sentenced to life imprisonment without parole for creating the underground drug-trading site.

Since then, Martin has watched the disintegration of major drug suppliers -- most notably, Silk Road in 2013 -- as police were able to trace them through real-world links.

But instead of the life imprisonment of Silk Road creator Ross Ulbricht scaring dealers away from the dark net, large-scale syndicates have been replaced by smaller, harder-to-trace operations. Martin estimates that Australia alone has around 150 online traders.

So how can a more accessible illegal drug store where dealers operate with a large degree of impunity create a less harmful illicit drug trade?

According to Martin, there are three potential benefits: reducing the violence of a bloody drugs war; promoting safer drug-taking practices through online forums; and supplying purer drugs, with fewer potentially deadly adulterants.

The decreased potential for violence stems from the anonymity of the Internet -- not only are the dealers' locations concealed from police, they're also hidden from each other. This makes revenge killings and drive-by shootings impossible.

"What the dark net does is basically protect people from that kind of violence, because no one knows where anyone is physically located," Martin explained.

By cutting out the street dealers and other middle-men, Martin believes the online dark net trade can also reduce the involvement of organised crime in the drug trade -- "or at least change the composition of the groups involved".

This would have flow-on benefits for the wider community, creating a less armed, less violent society.


But Martin also sees potential benefits for the users sourcing their illicit drugs on the dark net, who he says are generally relatively tech-savvy, affluent and well-educated -- and eager to promote a culture of harm reduction.

Despite several recent high-profile arrests and large-scale drug seizures, a recent report by the Australian Criminal Intelligence Commission revealed that law enforcement operations are having almost no impact on the availability, price and purity of drugs like crystal methamphetamine ('ice').

This failure of police to end the drug war has led to calls for an approach based around harm reduction, rather than punitive measures -- an approach facilitated by the dark net.

Drugs ordered on the dark net are delivered through the post, cutting out the middle man.

"All of the cryptomarkets and dark net marketplaces that we see have very active discussion forums with a large percentage of that discussion centred around things like safer forms of drug use," Martin said.

He points to the original Silk Road website, which featured a weekly Q&A session by an anonymous doctor, paid in bitcoins to answer users' questions about how to use illicit drugs more safely.

Moreover, because of the eBay-esque rating and feedback system employed by dark net drug traffickers, the drugs are generally of higher quality and less likely to contain potentially deadly adulterants.

"That can be dangerous as well," Martin cautioned.

"If you've got, for example, very strong ecstasy pills floating around there's a higher potential for overdose. But people typically have a better knowledge of the composition of their drugs (online)."

Whatever the benefits and the drawbacks, it's clear that despite law enforcement's best efforts, the dark net drugs trade isn't going anywhere any time soon.

The industry is already worth hundreds of millions of dollars globally, and is growing fast.

Australia has one of the highest rates of dark net drug dealers per capita in the world, beaten only by the Netherlands. More than a quarter (27 percent) of the world's dark net 'ice' trade is sold through Australian cryptomarket dealers.

The 2016 Global Drug Survey found that eight percent of Australian respondents (around 80 percent of whom report using illicit drugs) have bought drugs off the dark net.

"There will always be that physical market, but if current trends continue, then we're going to see very significant increases in dark net drug trading," Martin concluded.

Source: This article was published on huffingtonpost.com.au by Lara Pearce

Wednesday, 13 September 2017 07:11

How People Approach Facts and Information

People deal in varying ways with tensions about what information to trust and how much they want to learn. Some are interested and engaged with information; others are wary and stressed.

When people consider engaging with facts and information, any number of factors come into play. How interested are they in the subject? How much do they trust the sources of information that relate to the subject? How eager are they to learn something more? What other aspects of their lives might be competing for their attention and their ability to pursue information? How much access do they have to the information in the first place?

A new Pew Research Center survey explores these five broad dimensions of people’s engagement with information and finds that a couple of elements particularly stand out when it comes to their enthusiasm: their level of trust in information sources and their interest in learning, particularly about digital skills. It turns out there are times when these factors align – that is, when people trust information sources and they are eager to learn, or when they distrust sources and have less interest in learning. There are other times when these factors push in opposite directions: people are leery of information sources but enthusiastic about learning.

Combining people’s views toward new information – and their appetites for it – allows us to create an “information-engagement typology” that highlights the differing ways that Americans deal with these cross pressures. The typology has five groups that fall along a spectrum ranging from fairly high engagement with information to wariness of it. Roughly four-in-ten adults (38%) are in groups that have relatively strong interest and trust in information sources and learning. About half (49%) fall into groups that are relatively disengaged and not very enthusiastic about information or about gaining more training, especially when it comes to navigating digital information. Another 13% occupy a middle space: They are not particularly trusting of information sources, but they show higher interest in learning than those in the more information-wary groups.

Here are the groups:

The Eager and Willing – 22% of U.S. adults

At one end of the information-engagement spectrum is a group we call the Eager and Willing. Compared with all the other groups on this spectrum, they exhibit the highest levels of interest in news and trust in key information sources, as well as strong interest in learning when it comes to their own digital skills and literacy. They are not necessarily confident of their digital abilities, but they are eager to learn. One striking thing about this group is its demographic profile: More than half the members of this group are minorities: 31% are Hispanic, 21% are black and 38% are white, while the remainder are in other racial and ethnic groups.

The Confident – 16% of adults

Alongside the Eager and Willing are the Confident, who make up about one-in-six Americans and combine a strong interest in information, high levels of trust in information sources, and self-assurance that they can navigate the information landscape on their own. Few feel they need to update their digital skills, and they are very self-reliant in handling information flows. This group is disproportionately white, quite well educated and fairly comfortable economically. And roughly one-third of the Confident (31%) are between the ages of 18 and 29, the highest share in this age range of any group.

The Cautious and Curious – 13% of adults

The Cautious and Curious have a strong interest in news and information, even though they do not have high levels of trust in its sources – particularly national news organizations, financial institutions and the government. But they are interested in growth, expressing a great deal of interest in improving their digital skills and literacy. This group differs little from the general population demographically, although its members have somewhat lower levels of educational attainment than average.

The Doubtful – 24% of adults

The Doubtful are less interested in news and information than those in the previous groups. They are leery of news and information sources, particularly local and national news. They also have very busy lives, which could be why they show little interest in updating their digital skills or information literacy. The Doubtful are the most middle-aged of the groups. They tilt toward being white, and they are also relatively well-educated and above average in their economic status.

The Wary – 25% of adults

At the edge of the spectrum are the Wary. They are the least engaged with information. They have very low interest in news and information, low trust in sources of news and information and little interest in acquiring information skills or literacies. That places them at a distance from other Americans in terms of engagement with information. This group is heavily male (59%) and one-third are ages 65 or older.

What are the implications of the typology, especially for issues tied to digital divides and information literacy?

Typologies are useful because they add to the insights that can be gained by doing traditional analysis by demographics – such as gender, race, class, age and educational attainment.

One key takeaway from these typology findings is that there is no single, archetypal information consumer. A variety of factors shape people’s engagement with information. There is clear variation among citizens in their interest in information, their trust in various sources and their eagerness to gain further skills for dealing with information.

This typology suggests that one size does not fit all when it comes to information outreach. For instance, information purveyors might need to use very different methods to get material to the Eager and Willing, who are relatively trusting of institutional information and eager to learn, compared with the tactics they might consider in trying to get the attention of the Cautious and Curious, who are open to learning but relatively distrusting of institutional information. Similarly, groups with messages might want to plan wholly different processes to reach the Confident (who are basically information omnivores), compared with the Wary (who are quite reluctant to engage with new material).

Second, the typology highlights the challenges faced by those focusing on digital divides and information literacy as they try to help people improve their access to information and find trustworthy material. On the one hand, significant numbers of people are interested in building digital skills and information literacy. On the other hand, about half of adults fall into the groups we call the Doubtful and the Wary, who show less interest in assistance that could steer them to more trustworthy material.

And a third takeaway from the typology highlights how useful it would be if there were trusted institutions helping people gain confidence in their digital- and information-literacy skills. Libraries might be relevant here. Library users stand out in their information engagement. Overall, about half (52%) of adults have visited a public library or connected with it online in the past year. Those library users are overrepresented in the two most information-engaged groups. Some 63% of the Eager and Willing were library users in the past year, while this is true for 58% of the Confident. Additionally, both groups are much more likely than others to say they trust librarians and libraries as information sources.

At the same time, some words of caution are warranted. First, as broad as they were, the questions in this survey did not cover the vast range of people’s connection to information and use of it. Nor did they comprehensively probe people’s attitudes about learning and personal growth. The poll covered particular contexts and it focused on digital access to information. Thus, the results are not projectable to all aspects of people’s vast experiences with media and information.

Another caution: While there are numerical descriptions of the groups, there is some fluidity in the boundaries of the groups. Unlike many other statistical techniques, cluster analysis does not require a single “correct” result. Instead, researchers run numerous versions of it (e.g., asking it to produce different numbers of clusters) and judge each result by how analytically practical and substantively meaningful it is. Fortunately, nearly every version produced had a great deal in common with the others, giving us confidence that the pattern of divisions was genuine and that the comparative shares of those who are relatively engaged and relatively wary of information are generally accurate.
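The iterative process described above – running cluster analysis repeatedly with different numbers of clusters and comparing the results – can be illustrated with a small sketch. This is not Pew’s actual model (the report does not publish its algorithm or data); it is a minimal k-means implementation on synthetic respondents scored on two assumed dimensions, trust in information sources and eagerness to learn, showing how an analyst might try several cluster counts and inspect each solution.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: points are equal-length lists of floats."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centers, clusters

# Synthetic respondents: (trust in sources, eagerness to learn), both 0-1.
# Two loose blobs stand in for broadly "engaged" vs. "wary" respondents.
rng = random.Random(1)
engaged = [[0.8 + rng.uniform(-0.1, 0.1), 0.8 + rng.uniform(-0.1, 0.1)]
           for _ in range(30)]
wary = [[0.2 + rng.uniform(-0.1, 0.1), 0.2 + rng.uniform(-0.1, 0.1)]
        for _ in range(30)]
data = engaged + wary

# Try several cluster counts, as the researchers did, and compare
# how each solution carves up the same respondents.
for k in (2, 3, 5):
    centers, clusters = kmeans(data, k)
    print(k, sorted(len(c) for c in clusters))
```

Because there is no single “correct” answer, the analyst’s job is to compare these solutions and keep the one whose groups are most substantively meaningful – which is the judgment call the report describes.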

A third caution is that the findings represent a snapshot of where adults are today in a changing information ecosystem. The groupings reported here may well change in the coming years as people’s comfort and confidence with accessing information digitally evolve and as technologists offer new ways for people to encounter and create information.

Even allowing for those caveats, these findings add insight to swirling debates about how people think about and use information.

Source: This article was published at pewinternet.org by John B. Horrigan.


