It takes less than a minute to opt out of Facebook's new ads system.

Facebook member or not, the social networking giant will soon follow you across the web -- thanks to its new advertising strategy.

From today, the billion-plus-member social network will serve its ads to account holders and non-users alike -- following in the footsteps of advertising giants like Google, which has historically dominated the space.

In case you didn't know, Facebook stores a lot of data on you: not just what you say or who you talk to (no wonder it's a tempting trove of data for government surveillance) but also what you like and don't like. And that covers a lot, from goods to services, news sites and political views -- drawn not just from things you look at and selectively "like" but also from sites you visit and places you go. You can see all of these "ad preferences" in your account settings.

Facebook now has the power to harness that information to target ads at you both on and off its site.

In fairness, it's not the end of the world -- nor is it unique to Facebook. A lot of ads firms do this. Ads keep the web free, and Facebook said that its aim is to show "relevant, high quality ads to people who visit their websites and apps."

Though the company hasn't overridden any settings, many users will have this setting on by default, meaning you'll see ads that Facebook thinks you might find more relevant based on what it knows about you.

The good news is that you can turn it off, and it takes a matter of seconds.


Head to Facebook's ad preferences settings (and sign in if you have to), then make sure the "Ads on apps and websites off of the Facebook Companies" option is set to "No."

And that's it. The caveat is that you may see ads relating to your age, gender, or location, Facebook says.

You can also make other ad-related adjustments on the page -- to Facebook's credit, they're fairly easy to understand. The best bet (at the time of publication) is to switch all options to "No" or "No one."


Given that this also affects those who aren't on Facebook, there are different ways to opt out.

iPhones and iPads can limit ad tracking through a built-in option in the Settings app.

Android phones have a similar setting.

As for desktops, notebooks, and some tablets, your best option might be an ad-blocker.

But if you want to be thorough, you can opt out en masse through the Digital Advertising Alliance. The website looks archaic, and yes, you have to enable cookies first -- which seems to defeat the point, but makes sense given that these opt-outs are cookie-based -- and it takes just a couple of minutes.

Source:  http://www.zdnet.com/article/to-stop-facebook-tracking-you-across-the-web-change-these-settings/



Most major PC makers are shipping their desktops and notebooks with pre-installed software, which researchers say is riddled with security vulnerabilities.


A highly critical report by Duo Security, released Tuesday, said Acer, Asus, Dell, HP and Lenovo all ship software that contains at least one vulnerability that could allow an attacker to run malware at the system level -- in other words, completely compromising an out-of-the-box PC.



The group of PC makers accounted for upwards of 38 million PCs shipped in the first quarter of the year, according to estimates garnered from IDC's latest count.


The vast majority of those will be sold to consumers, and most of those will come with some level of system tool used to monitor the computer's health or processes. This so-called bloatware -- also known as junkware or crapware -- is preinstalled software that lands on new PCs and laptops, and some Android devices. Often created by the PC maker, it's usually deeply embedded in the system and difficult to remove.


PC makers install the software largely to generate money on low-margin products, despite it putting system security at risk.


"We broke all of them," said Duo researchers in a blog post. "Some worse than others."

Every PC maker that was examined had at least one flaw that could have let an attacker grab personal data or inject malware on a system through a man-in-the-middle attack.



One of the biggest gripes was the PC makers' failure to use TLS encryption, which creates a secure tunnel for files and updates to flow through. Updating over HTTPS makes it difficult, if not impossible, to carry out man-in-the-middle attacks.


Of the flaws, Acer and Asus scored the worst, sending unsigned manifest and update files over unencrypted connections, potentially allowing an attacker to inject malicious code as it's being downloaded. By not using code-signing checks, an attacker can trivially modify or replace files and manifests in transit, said the corresponding report.


The flaws are such easy targets that, as the researchers put it, the "average potted plant" could exploit them.

Duo's researchers found a total of 12 separate vulnerabilities, with half of those rated "high," indicating a high probability of exploitation.


Most of the higher-priority flaws were fixed, but Asus and Acer have yet to offer updates.


The researchers said users should wipe affected machines and reinstall "a clean and bloatware-free copy of Windows before the system is used"; otherwise, reducing the attack surface "should be the first step in any system-hardening process."


A Dell spokesperson said Wednesday that "customer security is a top priority" for the company. "We fared comparatively well in their testing and continue to test our software to identify and fix outstanding vulnerabilities as we examine their findings more closely."


Lenovo said in a statement: "Upon learning of the vulnerability, Lenovo worked swiftly and closely with Duo Security to mitigate the issue and publish a security advisory." The spokesperson also said a System Update removal utility "will soon be available."


Acer, Asus, and HP did not respond to a request for comment.


Source:  http://www.zdnet.com/article/hp-dell-acer-asus-bloatware-security-flaws-vulnerabilities/





A prospective client had something to hide when she claimed no previous involvement in an industry rife with fraud. That claim, made alongside a well-informed business plan, rang false. Other clues about her integrity worried the lawyer, who soon suspected she was being dishonest. After the meeting, he consulted another partner, who in turn delivered the puzzle to my e-mail inbox. My mission was to fit the mismatched pieces of information together, either substantiating or disproving the lawyer's skepticism.

Internet Archive to the Rescue

Wanting to emphasize the importance of retaining knowledge of history, George Santayana wrote the words made famous by the book The Rise and Fall of the Third Reich: "Those who cannot remember the past are condemned to repeat it." Of course, at the time neither the Internet Archive nor the Information Age existed. If they had, perhaps he would have edited his philosophy to state, "Those who cannot discover the past are condemned to repeat it."

Certainly in times when new information amounts to five exabytes a year, or the equivalent of "information contained in half a million new libraries the size of the Library of Congress print collections" (How Much Information? 2003), it is perhaps fortunate that librarians possess a knack for discovering information. It is also in our favor that Brewster Kahle and Alexa Internet foresaw a need for an archive of Web sites.
Internet Archive and the Wayback Machine

Founded in 1996, the Internet Archive contains about 30 billion archived Web pages. While always open to researchers, the collection did not become readily accessible until the introduction of the Wayback Machine in 2001. The Wayback Machine enables finding archived pages by their Web address. Enter a URL to retrieve a dated listing of archived versions. You can then display the archived document as well as any archived pages linked from it.

The Internet Archive helped me successfully respond to the concerns the lawyers had about the prospective client. It contained evidence of a business relationship with a company clearly in the suspect industry. Broadening the investigation to include the newly discovered company led to information about an active criminal investigation.

Suddenly, the pieces of the puzzle came together and spelled L-I-A-R.
Using the Internet Archive should be a consideration for any research project that involves due diligence, or the careful investigation of someone or something to satisfy an obligation. In addition to people and company investigations, it can assist in patent research for evidence of prior art, or copyright or trademark research for evidence of infringement. It can also come in handy when researching events in history, looking for copies of older documents like superseded statutes or regulations, or when seeking the ideals of a former political administration.

(Note, 25 October 2004: A special keyword search engine, called Recall Search, facilitates some of these queries. Unfortunately, it was removed from the site during mid-September. Messages posted in the Internet Archive forum indicate plans to bring it back. Note, 15 June 2007: I think it's safe to assume that Recall Search is not coming back. However, check out the site for developments in searching archived audio (music), video (movies) and text (books).)

Recall Search at the Internet Archive

But while the Internet Archive contains information useful in investigative research, finding what you want within the massive collection presents a challenge. If you know the exact URL of the document, or if you want to examine the contents of a specific Web site--as was the case in the scenario involving the prospective client--then the Wayback Machine will suffice. But searching the Internet Archive by keyword was not an option until recently. (Note: See the note in the previous paragraph.)

During September 2003, the project introduced Recall Search, a beta version of a keyword search feature. Recall makes about one-third of the archived collection, or 11 billion Web pages, accessible by keyword. While it further facilitates finding information in the Internet Archive, it does not replace the Wayback Machine. Because of the limited size of the keyword-indexed collection and the problems inherent in keyword searching, due diligence researchers should use both finding tools.
Recall does not support Boolean operators. Instead, enter one or more keywords (fewer is probably better) and, if desired, limit the results by date.

Results appear with a graph that illustrates the frequency of the search terms over time. It also provides clues about their context. For example, a search for my name limited to Web pages collected between January 2002 and May 2003 finds ties to the concepts, "school of law," "government resources," "research site," "research librarian," "legal professionals" and "legal research." The resulting graph further shows peaks at the beginning of 2002 and in the spring of 2003.

Applying content-based relevancy ranking, Recall also generates topics and categories. Little information exists about how this feature works, and I have experienced mixed results. But the idea is to limit results by selecting a topic or category relevant to the issue.

Suppose you enter the keyword, Microsoft. The right side of the search results page suggests concepts for narrowing the query. For example, it asks if instead you mean Microsoft Windows, Microsoft Internet Explorer, Microsoft Word, and so on. Likewise, a search for turkey suggests wild turkey, the country of Turkey, turkey hunting, roast turkey and other interpretations.

While content-based relevancy ranking can be a useful algorithm, it is far from perfect. Some topics and categories generated might not seem to make sense. If the queries you run do not produce satisfactory results, consider another approach.

Pinpoint the specific sites you want to investigate by first conducting the research on the Web. In the prospective client example, an old issue of the newsletter of the company under criminal investigation (Company A) mentioned the prospective client's company (Company B). This clue led us to Company A's Web site where we found no further mention of Company B. However, with the Web site address in hand, we reviewed almost every archived page at the Internet Archive and found solid evidence of a past relationship. Additional research, during which we tracked down court records and spoke to one of the investigators, provided the verification we needed to confront the prospective client.

Advanced Search Techniques

You can display all versions of a specific page or Web site during a certain time period by modifying the URL. Greg Notess first illustrated this strategy in his On The Net column (See "The Wayback Machine: The Web's Archive," Online, March/April 2002).
A request for all archived versions of a page looks like this:

http://web.archive.org/web/*/http://www.domain.com

The asterisk is a wildcard that you can modify. For example, to find all versions from the year 2002, you would enter:

http://web.archive.org/web/2002*/http://www.domain.com

Or to find all versions from September 2002, you would enter:

http://web.archive.org/web/200209*/http://www.domain.com
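These date-restricted requests all follow the Wayback Machine's URL pattern, web.archive.org/web/<date-prefix>*/<page-URL>, which a short helper can assemble. This is a sketch; the function name is illustrative, not part of any official API:

```python
def wayback_listing_url(page_url, date_prefix="*"):
    """Build a Wayback Machine listing URL.

    A date_prefix of "*" requests every archived version; a partial
    timestamp such as "2002*" (the year 2002) or "200209*"
    (September 2002) narrows the range.
    """
    return f"http://web.archive.org/web/{date_prefix}/{page_url}"

print(wayback_listing_url("http://www.domain.com"))
# http://web.archive.org/web/*/http://www.domain.com
print(wayback_listing_url("http://www.domain.com", "200209*"))
# http://web.archive.org/web/200209*/http://www.domain.com
```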
Sometimes you encounter problems when you browse pages in the archive. For example, I often receive a "failed connection" error message. This may be the result of busy Web servers or a problem with the page. It may also occur if the live Web site prohibits crawlers.

To find out if the latter issue is the problem, check the site's robot exclusion file. A standard honored by most search engines, the robot exclusion file resides in the root-level directory. To find it, enter the main URL in your browser address line followed by robots.txt. Like this: http://www.domain.com/robots.txt .
If the site blocks the Internet Archive's crawler, the file will contain two lines of text similar to the following:
User-agent: ia_archiver
Disallow: /
If it forbids all crawlers, the commands should look like this:
User-agent: *
Disallow: /
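Rather than eyeballing the file, you can let Python's standard-library robots.txt parser do the check. This sketch parses the two-line example above directly, instead of fetching it over the network:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks only the Internet Archive's crawler
robots_txt = """\
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The Archive's crawler is blocked from every path...
print(parser.can_fetch("ia_archiver", "http://www.domain.com/page.html"))  # False
# ...but other crawlers are still allowed
print(parser.can_fetch("SomeOtherBot", "http://www.domain.com/page.html"))  # True
```

To test a live site, replace the parse() call with set_url("http://www.domain.com/robots.txt") followed by read().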

It's common for Web sites to block crawlers, including the Internet Archive, from indexing their copyrighted images and other non-text files. If the Internet Archive blots out images with gray boxes, then the Web site probably prevents it from making the graphics available.

If the site does not appear to block the Internet Archive, don't give up when you encounter a "failed connection" message. Return to the Wayback Machine and enter the Web page address. This strategy generates a list of archived versions of the page whereas Recall presents specific matches to a query. One of the other dated copies of the page may load without problems.


While the Internet Archive does not contain a complete archive of the Web, it offers a significant collection that due diligence researchers should not overlook. Tools like the Wayback Machine and Recall Search provide points of access. However, these utilities only handle simple queries. You can search by Web page address or keyword. You cannot conduct Boolean searching or limit a query by key information. Moreover, Recall Search limits keyword access to one-third of the collection. Consequently, conduct what research you can elsewhere first using public Web search engines and commercial sources. Then use the information you discover to scour relevant sites in the Internet Archive.

Source:  http://virtualchase.justia.com/content/internet-archive-and-search-integrity


Have you ever tracked all the ways you use data in a single day? How many of your calories, activities, tasks, messages, projects, correspondences, records and more are saved and accessed through data storage every day? I bet you won’t be able to stop once you start counting.

Many of us never pause to consider what that means, but data is growing exponentially -- with no end in sight. There are already more than a billion cellphones in the world, emitting 18 exabytes of data (an exabyte is a billion gigabytes) every month. As more devices continue to connect to the Internet of Things, sensors on everything from automobiles to appliances increase the data output even more.

By 2020, IDC predicts, the world's data will reach a staggering 44 zettabytes. The only logical response to this data deluge is to create more ways to store and maximize all this information.

Artificial intelligence and machine learning have become major areas of research and development in recent years as a response to this data flood, as algorithms work to find patterns that can help manage the data. While this is a step in the right direction in terms of learning from data, it still doesn’t solve the storage problem. And while interesting advances are being made in data storage on DNA molecules, for now, realistic data storage options are still a little less sci-fi sounding. Here are four viable solutions to our storage capacity woes.

The hybrid cloud

We all understand the concept of the cloud. Hybrid cloud storage is a little different though, in that it uses both storage in the cloud as well as on-site storage or hardware. This creates more value through a “mash-up” that accesses either kind of storage, depending on the security and the need for accessibility.

A hybrid data storage solution addresses common fears about security, compliance and latency that straight cloud storage raises. Data can be housed either onsite or in the cloud, depending on risk classification, latency and bandwidth needs. Enterprises that choose hybrid cloud storage are drawn to it because of its scalability and cost-effectiveness, combined with the option of keeping sensitive data out of the public cloud.
All flash, all the time

Flash is the most common form of data storage in consumer tech, including cell phones. Unlike traditional storage, which stores information on spinning disks, flash stores and accesses information directly on a semiconductor. With flash prices continuing to fall as the technology packs more data into the same amount of space, flash makes sense for a lot of medium-sized enterprises.

Recent breakthroughs by data storage company Pure Storage aim to scale flash to the next level, making it a real contender for large enterprises in the big data storage war. Pure took its all-flash approach to storage with FlashBlade, a box designed to store petabytes of unstructured data at an unprecedented scale. The refrigerator-sized box can store up to 16 petabytes of data, and co-founder John Hayes believes that capacity can be doubled by 2017. Sixteen petabytes is already five times as much as comparable traditional storage devices hold, so clearly Pure's scalable blade approach is a step in the right direction.


Intelligent software designed storage

Intelligent Software Designed Storage (I-SDS) removes the need for cumbersome proprietary hardware stacks that are generally associated with data storage, and replaces them with storage infrastructure that is managed and automated by intelligent software, rather than hardware. I-SDS is also more cost efficient, with faster response times, than storing data on hardware.

I-SDS moves toward a storage design that mimics how the human brain stores vast amounts of data with the unique ability to call it up at a moment’s notice. Essentially, I-SDS allows big data streams to be clustered. Approximate search and the stream extraction of data combine to allow the processing of huge amounts of data, while simultaneously extracting the most frequent and appropriate outputs from the search. These techniques give I-SDS a huge advantage over obsolete storage models because they team up to improve speed while still achieving high levels of accuracy.

Cold storage archiving

Cold storage is economical for data that isn't often used. By keeping data that doesn't need to be readily available on slower-moving, less expensive disks, space is freed up on faster disks for information that does. This option makes sense for large enterprises with backlogged information that doesn't need to be accessed regularly.

Such enterprises can store their data based on its “temperature,” keeping hotter data on flash, where it can be more quickly accessed, and archived info in cost-effective cold storage archives. However, the deluge of big data means that enterprises are gleaning so much data at once that knowing what is valuable and what can be put on the back burner isn’t always clear.
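As a toy illustration of temperature-based placement, a tiering policy can be reduced to an access-recency rule. The 90-day threshold and tier names here are invented for the example:

```python
from datetime import datetime, timedelta

# Hypothetical policy: data untouched for more than 90 days is "cold"
# and belongs in the archive tier; anything fresher stays "hot" on flash.
HOT_WINDOW = timedelta(days=90)

def storage_tier(last_accessed, now):
    """Classify a record as 'hot' (flash) or 'cold' (archive)."""
    return "hot" if now - last_accessed <= HOT_WINDOW else "cold"

now = datetime(2016, 5, 22)
print(storage_tier(datetime(2016, 5, 1), now))   # hot
print(storage_tier(datetime(2015, 11, 1), now))  # cold
```

The hard part in practice, as noted above, is not the rule itself but deciding which data is genuinely cold.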

Bigger data, smarter storage

While the sheer volume of data continues to grow exponentially, so too does its perceived value to companies eager to glean information about their consumers and their products. Data storage needs to be fast, intuitive, effective, safe and cost-effective — a tall order in a world where data now far outpaces the population. It will be interesting to see which method can best address all these needs simultaneously.

Source: http://techcrunch.com/2016/05/22/how-storage-is-changing-in-the-age-of-big-data/


Is everyone’s website illegal?

Your website consists of visible text and graphics, geared to the sighted reader. Its terms and conditions include legal disclaimers and limitations of liability, which, it explains, apply unless they are specifically prohibited by law. As a service to the public, you have posted scores of videos providing useful information for consumers in your industry.
Are these common practices illegal?

Some class action lawyers say so. They've been filing claims against standard websites that, they allege, violate the federal Americans with Disabilities Act or a New Jersey consumer protection statute.
Class actions targeting website practices aren't unusual. In the early days of the commercial Internet, many companies were sloppy with their website terms and privacy policies. Most notably, high-flying dot-com companies that promised never to sell their customer data were caught flat-footed when the bubble burst. In liquidation, their customer lists were their most saleable assets, which they then usually sold, in violation of their prior promises.
Cases from that era showed the legal vulnerability of disconnects between website promises and actual business practices.

Similarly, when web technologies ran ahead of website disclosures, as allegedly occurred in some cases with behavioral advertising, customer tracking, and information sharing practices, the class action lawyers pounced then too. On multiple occasions in 2010 and 2011, the Wall Street Journal’s “What They Know” series would run articles about customer tracking on the Internet, and, the very next day, class action suits were filed keyed to the practices revealed by the Journal.

The ADA and New Jersey suits appear to be the newest wave of Internet class actions -- ones that have the potential to reach thousands if not millions of website operators.

Is the Internet expanding privacy expectations?

Is the Internet invading privacy, or expanding privacy? The conventional wisdom is that the Internet is eviscerating privacy. But in some ways a heightened focus on privacy in the digital era may be creating new and greater privacy expectations.

Consider the simple matter of lists of addressees and cc’s on emails.
In the ancient days of postal mail, it was never a big deal if a sender revealed, on a letter, the other persons to whom he or she was sending the same letter or copies of it. Lots of letters show multiple addressees or multiple persons copied. That indeed explains the origin of the "cc" field, as a visible list of the persons to whom a "carbon copy" (another ancient term) was being sent.

But an expectation has developed recently that one should never send out a mass email that reveals the email addresses of all of the recipients, unless they previously knew one another. That is why the Federal Trade Commission was so red-faced recently when, while preparing for its first PrivacyCon workshop on privacy research, it sent an email to all attendees revealing -- horrors! -- all of their email addresses. The agency "sincerely apologized" for this terrible mistake.

The presumed confidentiality of one's email address is reflected in other laws. College professors, for example, are instructed never to communicate with an entire class of students by placing all of their email addresses in the "to" field; rather, they must use the "bcc" field, so that no student receives his or her fellow students' email addresses, which some students may have designated as confidential personal information under the Family Educational Rights and Privacy Act.
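That advice maps onto how mail libraries separate visible headers from actual delivery. A minimal Python sketch, with invented addresses and server name, shows the pattern: the class list goes in the SMTP envelope, never the headers, so no recipient sees it:

```python
from email.message import EmailMessage

# Hypothetical class list; these addresses never appear in any header
students = ["alice@example.edu", "bob@example.edu"]

msg = EmailMessage()
msg["From"] = "professor@example.edu"
msg["To"] = "professor@example.edu"  # only the sender's own address is visible
msg["Subject"] = "Class announcement"
msg.set_content("Reminder: the midterm is Friday.")

# With smtplib, the actual recipients are passed as envelope addresses,
# separate from the headers anyone receives:
#   smtplib.SMTP("smtp.example.edu").send_message(msg, to_addrs=students)

print("alice@example.edu" in str(msg))  # False
```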

Though the prohibition against letting strangers see others' email addresses in group emails now seems settled, the presumed harm to be avoided -- use of those addresses for bulk commercial emails -- is fairly speculative, and such a misuse, if it occurred, would seem to cause more of an inconvenience than a true privacy invasion. The same goes for concerns about reply-all "catastrophes," such as the one that hit Thomson Reuters employees in August 2015. The event inconvenienced employees, but its true lasting impact appeared to be a flood of humorous Twitter traffic. (Another reply-all incident struck Time Inc. just this week.)

The best explanation for this new expectation, rather, seems to be an expanding understanding of privacy, at least in certain areas. Contrary to the conventional wisdom, our expectations of privacy are not steadily and uniformly shrinking. In some cases, they are expanding.

Can you be sued for posting your opinions on the Internet?

A restaurant tells customers it may sue them if they post unfavorable reviews on the Internet. A flooring company sues a customer who complained on social media that he had an “absolutely horrible experience” with the company.
KlearGear, a gadget company, included in its Internet terms a provision that "your acceptance of this sales contract prohibits you from taking any action that negatively impacts KlearGear.com, its reputation, products, services, management or expenses." The terms also set damages for a violation: $3,500.

If something seems wrong to you about these cases, you are not alone. While libel law has struggled for years with the dividing line between expressions of actionable fact and constitutionally protected opinion, most laypeople, and judges, believe that statements of opinion should be protected, and broadly construed.
That may be why Grill 225, a restaurant in Charleston, met with such opposition when its scheme for suppressing unfavorable reviews was recently publicized. The restaurant required persons booking online reservations to agree to terms and conditions in which, among other things, the customer agreed "that they may be held legally liable for generating any potential negative, verbal, or written defamation against Grill 225."

Most efforts to prevent or penalize Internet comments and criticism are crushed in the court of public opinion even before they reach the courthouse. Grill 225, for example, is really only stating the obvious when it says that it could sue a customer. It wisely hasn’t done so in the two years that it has posted its terms. The flooring company, in Colorado, did sue its customer, but the case provoked a state legislator to propose stronger protection against suits aimed at chilling free speech.

Indeed, last year California passed a so-called “Yelp Bill” that prohibited businesses from including in their contracts “a provision waiving the consumer’s right to make any statement regarding the seller or lessor or its employees or agents, or concerning the goods or services.” A similar bill, the proposed Consumer Review Freedom Act, has been introduced in Congress.

When cases do get to court, even under existing law, statements of opinion are generally protected. As one example, consider a case involving presidential candidate Donald Trump, back in the early 1980s, when he announced an audacious plan to build the tallest building in the world, a 150-story skyscraper, on landfill just south of downtown Manhattan.

Trump’s plan met opposition in Chicago, then home of the world’s tallest building, the 108-story Sears Tower (now Willis Tower). Specifically, the Chicago Tribune’s architecture critic, Paul Gapp, analyzed Trump’s proposal in a review and deemed it, among other things, “one of the silliest things anyone could inflict on New York or any other city." Gapp’s review was accompanied by a Tribune artist’s rendering of southern Manhattan with a giant new building, a Sears Tower lookalike on steroids, sticking out like a sore thumb below and east of Battery Park.

Trump, no more shy then than he is now, immediately sued the Tribune, seeking damages to the tune of $500 million. I worked for the Tribune’s law firm and had the task of writing the motion to dismiss Trump’s case. There were plenty of good legal authorities on the right of critics to express their opinions, but I decided to prepare our brief a bit differently.

Experts make privacy regulation a serious threat

Now is the time to get smart about privacy and technology, because your government regulators are smart and savvy in those areas.

No, that’s not a misprint. Though government regulators are often far behind on the technology curve, real experts have taken over at several important agencies that regulate conduct on the Internet.
Take Ashkan Soltani, who took over in late 2014 as Chief Technologist for the Federal Trade Commission. Just by hiring a chief technologist, the FTC showed awareness of the need for deep computer expertise to effectively regulate privacy and commercial practices on the Internet. And by hiring Soltani, one of the sharpest computer privacy experts in the country, the FTC showed it was serious.

Soltani was one of a handful of computer experts who have been at the forefront of studying privacy on the Internet. Along with his former colleagues at Berkeley, and like-minded researchers, especially at Stanford and Carnegie-Mellon universities, Soltani has identified and publicized many previously unknown ways in which the Internet allows personal information to be collected, used and commercialized.
Soltani and his colleagues haven’t just quietly studied Internet privacy. They’ve been active and savvy in getting the word out on their studies.

To take one example: a few years back, most website operators thought they had satisfied their disclosure obligations if they told users that they honored users' instructions with respect to HTTP "cookies" (datasets that identify previous browsing activity). But in an important research report in 2009, Soltani and colleagues reported that even when users deleted HTTP cookies in an attempt to shield their previous browsing activity, some websites, by activating Flash cookies (often tied to web video files), would automatically regenerate those HTTP cookies -- a generally unintended result, but one that cast doubt on those sites' privacy promises. Soltani followed this up with reports on other pervasive tracking technologies.

Soltani and his colleagues and co-authors, many of whom share his conviction that consumers need stronger privacy protection, focused their research on exposing technologies (like Flash cookies) that collected or revealed information consumers thought was private. Many of their research projects became the foundation of class action lawsuits against companies that had made privacy promises in ignorance of these technologies.

And it wasn’t a coincidence that Soltani’s research was used in class action cases. He served as technology adviser to the Wall Street Journal for its widely read “What They Know” series that has brought many Internet privacy issues to widespread attention, beginning in 2010. In several instances, a flurry of class action suits followed within days of the Journal’s Soltani-supported articles.

Soltani isn’t the only technology whiz to join the government from the Berkeley-Stanford-Carnegie-Mellon research triad. The Federal Communications Commission recently announced that it was hiring Jonathan Mayer, another member of the group, to act as its Chief Technologist. Like Soltani, his research has focused specifically on web-tracking technologies. And as with Soltani, Mayer’s research has led to major privacy cases, including an FTC consent decree against Google, concerning its use of tracking code on the Safari browser. While privacy isn’t an FCC focus, Mayer’s work on net neutrality could significantly affect many businesses.

Some business people may think that they don’t have to worry much about the FTC, a slimly staffed agency that has the impossible mission of policing “unfair or deceptive acts or practices” all over our huge country. But the FTC has been very active in the Internet privacy area, and its results, usually in the form of consent decrees, are reshaping how business is done on the Internet.

As two privacy experts have pointed out in a law review article titled “The FTC and the New Common Law of Privacy,” the FTC has become the primary regulator of privacy on the Internet, and its large and growing body of consent decrees has an effect far beyond the companies that are directly bound (a group that, moreover, includes such Internet giants as Google, Microsoft, Facebook, and LinkedIn). The authors assert that the general belief that the United States has weak privacy regulation compared to Europe is “becoming outdated as FTC privacy jurisdiction develops and thickens.”

Source:  http://www.thompsoncoburn.com/news-and-information/internet-law-twists-and-turns.aspx

Categorized in Internet Privacy

Google hopes to quickly make its virtual reality platform Daydream a mass-market product. "Our intention is to operate at Android scale, meaning hundreds of millions of users," senior product manager Brahim Elbouchikhi said at a session on monetizing Daydream apps at the Google I/O developers conference. "In a couple of years, we will have hundreds of millions of users on Daydream devices." And in order to keep those users entertained, Google wants app developers to build experiences that are long, highly interactive, and devoid of "freemium" mechanics that could break users' concentration.

Daydream was first announced yesterday, and Google launched a site for virtual reality developers this morning, so they can get started before the first Daydream-ready phones start rolling out this fall. The site covers creators of games and apps for both Daydream and Cardboard, the low-end VR platform that Google currently operates. But based on messaging at I/O, the overlap between those categories could be minimal. "Cardboard apps were about fun, snackable, short experiences, largely non-interactive," said Elbouchikhi. "Daydream apps are quite the opposite. They're about immersive content, longform, highly interactive." He cited research that suggested mobile VR users favor once-a-day sessions of 30 minutes or longer, in the comfort of their home — "nobody is wearing these headsets in the street, FYI."

Elbouchikhi called out some of the bad habits that VR content can slip into, like substituting novelty for real interactivity or substance. "It's easy and tempting to say, 'Oh, I'm just going to drop someone somewhere amazing,'" he said. "That is a great experience for 30 seconds. And then as soon as you achieve presence, you say, what do I want to do?" He also cautioned against adopting some strategies that have worked outside VR, like making users stop an experience to pay for microtransactions. "You're not going to want to have free-to-play mechanics, energy mechanics, time-based mechanics. It's not going to work," he said. The Play Store will support in-app purchases, but he suggested that developers use this to offer a demo version of an experience for free, then let interested players buy full access once they're inside.

In Daydream, interactivity also means using the included motion-control remote, which all developers will be required to do. Google wants to do away with the standard method of interacting with Cardboard apps: staring at an option and selecting it by waiting or clicking a button. Instead, developers should treat the remote like a laser pointer, taking advantage of its internal sensors. "If you're bringing an app over from Cardboard, just using the controller as a clicker does not count as taking advantage of the controller," said VR design team member Alex Faaborg.

The session also impressed on developers that they have a responsibility to welcome people to a new medium, both by creating high-quality work and by helping ease them into the language of VR with things like experience intensity ratings. Among other things, developers should avoid, say, promising a relaxing beach experience and then attacking users with zombies, said Faaborg. "We don't want blatant surprises."

Source:  http://www.theverge.com/2016/5/19/11716154/google-daydream-android-vr-developers-guidelines

Categorized in Search Engine

As Google increasingly incorporates direct answers and other types of featured snippets into search results pages, columnist Andrew Shotland points out that businesses may want to get smarter about marking up their pages.

I have been noticing a lot of Google Answer Boxes showing up for queries with local intent these days. My recent post, "Are You Doing Local Answers SEO?", pointed out this fantastic result HomeAdvisor is getting for "replace furnace" queries:

Replace Your Damn Furnace Already

When clients get these local answer boxes, they often perform significantly better than regular #1 organic listings. In our opinion, these seem to be driven primarily by the following factors:

Domain/page authority
Text that appears to answer the query
Easy-to-understand page structures (broken up into sections that target specific queries, tables, prices and so on). Schema is not necessary here, but it helps.
For more of a deep dive on how these work, see Mark Traphagen’s excellent summary of last year’s SMX West panel on The Growth of Answers SEO.

But I am not here to talk about how great answer boxes are. I am here to talk about this result that recently popped up for “university of illinois apartments”:

Google Answer Boxes Gone Wild

At first glance, you might think this was a basic list of apartments for rent near the university. But if you look closer at the grid of data, you will see that it looks more like part of a calendar, which is pretty useless.

Many searchers may look past this and just click on the link, but this got me thinking that I really don’t want Google controlling what parts of my site get shown in the SERPs, particularly when it looks more like a Lack of Knowledge Box.

Think about if you had some unsavory user comments on the page that appeared in the answer box. Not only would this be a useless result, but it also might be damaging to your brand. The apartments result might make some searchers think ApartmentFinder is a bad site. So what went wrong here?

If you examine the ApartmentFinder URL in the answer box, you’ll notice that it doesn’t display any calendar in the UI. But if you search the code for “calendar,” you’ll see:

Calendar Code

This shows that there is some kind of calendaring app in a contact form.

As you can see from the next screen shot, the first Contact button that appears on the page is fairly close to the h1 that reads, “81 apartments for rent near the University of Illinois”:

Calendar Contact

And if you click on the Contact button, you get a pop-up form with a calendar:

Calendar Pop Up

It seems that Google is:

assuming the query deserves a more complex list of results than the standard SERP;
looking for the data closest to the strongest instance of the query text on the page (the h1); and
taking the first thing that looks like a table of data and putting it on the SERP. (I am sure it’s more complicated than that, but not too sure.)

So what can you do to avoid this?

Mark up your data with schema.org markup. This should give you the best chance of avoiding Google getting your info wrong. (On that note, the Schema.org site itself is kind of a drag to use. Try Google’s own site on Structured Data. It has all of the schema stuff you’ll need, plus some stuff that isn’t on Schema.org.)
Make sure the content you want to appear in answer boxes is closest to the on-page text that has the strongest match for the query — often the h1, but this could be a subheading, as well. If possible, make multiple subheadings that target different queries (e.g., “cheap apartments for rent,” “pet friendly apartments,” and so on) that might be the best results. For more on why this might be important, check out Dave Davies’ great take on the recent presentation from SMX West on how Google works by Google’s Paul Haahr. And while you’re at it, Rae Hoffman’s take on it is pretty great, too.
Put your content in a simple table on the page, or at least make it easy for Google to build its own. Because ApartmentFinder doesn't label each listing on that page with its type, it is hard for Google to show a table of, say, one-bedroom apartments for rent at specific prices. Just adding "1BR" as text on each one-bedroom result may be enough to fix the problem.
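The schema.org recommendation above can be shown with a minimal sketch. The Python below builds a hypothetical JSON-LD item for a one-bedroom listing; Apartment, numberOfRooms and PostalAddress are real schema.org vocabulary, but the values, and the idea of applying them to this particular page, are illustrative assumptions rather than ApartmentFinder's actual markup.

```python
import json

# Minimal schema.org "Apartment" item with explicit structure --
# data like this gives Google something far better to extract
# than a stray calendar widget buried in a contact form.
listing = {
    "@context": "https://schema.org",
    "@type": "Apartment",
    "name": "1BR apartment for rent near the University of Illinois",
    "numberOfRooms": 1,
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Champaign",
        "addressRegion": "IL",
    },
}

json_ld = json.dumps(listing, indent=2)
print(json_ld)  # embed inside a <script type="application/ld+json"> tag
```

One such block per listing, or an ItemList wrapping them, makes the page's structure explicit instead of leaving Google to guess which grid of data on the page is the one worth extracting.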

Figuring out how to impact the answer box displays is akin to what we all went through trying to figure out how to influence what shows up for titles, descriptions and rich snippets. It can take a bit of trial and error, but when it works, it can be the answer to your SEO prayers.

Source:  http://searchengineland.com/dont-trust-google-structure-local-data-246585

Categorized in Search Engine

We might be using the “Deep Web” every day without calling it that, or even being aware of its existence. Simply filling in a web form enables us to access the Deep Web and retrieve data from a variety of databases, some free, some subscription-based and some with major access costs attached. Any online data used for business purposes (not necessarily the same purposes for which it was collected) can be risky, but not knowing what data is out there about you and your company represents a significantly higher threat. On the other hand, a thorough Deep Web search can greatly benefit companies researching competitors, potential employees, customers and business trends.

There are various types of data that can be accessed using intermediate technical skills and a few Deep Web resources (see Figure 1, “Layers of the Web”): information customers share about the organisation and its products, and information employees share about their jobs, the products they are working on and company strategy/policies. More importantly, data aggregated from publicly available databases can reveal costly, confidential information.

In terms of resources, an initial Deep Web exploration does not imply major investment or require a team of highly skilled IT developers. Freely available tools such as DWT’s Biznar represent an excellent starting point to explore a variety of authoritative business databases for a real-time search. Other subject-specific publicly available search portals include Mednar for medical researchers or WorldWideScience.org for scientific information. This kind of exploration can be learnt and done in-house with minimum resources and can save your company many hours of online searching using traditional search engines. For on-demand searches, constant monitoring of specific databases and alerts, commercial applications such as those powered by Explorit Everywhere! can facilitate the use of a targeted Deep Web search strategy, advise on the content that needs monitoring and provide a unified access point to all the necessary data sources.

Going back to the types of data that might be made visible through Deep Web resources without their owner being aware: currently, intellectual property on the Deep Web is a matter under scrutiny. While traditional search engines might only take into account the big picture, trying to match your search terms in the title, abstract and keywords, Deep Web tools can perform fully comprehensive searches. Apart from letting you monitor your own patents, inventions and discoveries online, this could save your company money by preventing you from becoming a litigation target after mistakenly infringing on another company’s intellectual property rights.

The ubiquitous availability of social media applications and people’s urge to share data have led to extensive concerns about how much data your employees and customers are disclosing about your company. Social media enables the creation of enormous amounts of data that is not easy to search and interpret. Most of this data is stored in dynamic databases that are not indexed by traditional Surface Web search engines. This means it is part of the Deep Web, sometimes protected only by an individual’s privacy settings. With the right Deep Web tools, anyone can monitor the details that customers share about your products, their purchasing experience and their general attitude towards the organisation. More than monitoring various data sources in isolation, aggregating them can reveal new information or give renewed meaning to existing (most of the time, publicly available) information. Caution is advised when aggregating data collected by another organisation, as processing it might breach data protection regulation.

On the negative side of things, sheltered by a fake username and encouraged by a number of followers, anyone can express an opinion about the organisation on social media, and tracing, challenging or disproving it will demand considerable resources. More dangerously, the ease of creating and sharing content challenges employees’ obligation to comply with the company’s non-disclosure policies, making social media sites an ideal source of data about company difficulties, new products or future strategy. Constant monitoring and awareness of these breaches can help the company reinforce policies and put contingency plans in place to contain the damage.

Even if you feel that traditional online research tools provide you with all the data necessary for your business activities, Deep Web data sources can no longer be ignored. The Deep Web, and its notorious subset the Dark Web, is significantly larger than the Surface Web, and due to its vastness its content cannot always be monitored or regulated. Being aware of its existence and acquiring technology to monitor your presence and the data about you on it, or to monitor your competitors, might prove beneficial in a market where competitive intelligence is a critical component of success.

Source:  http://www.deepwebtech.com/2015/09/deep-web-for-enterprises-what-you-can-learn-about-competitors-and-customers-and-vice-versa/

Categorized in Deep Web

It is clear that search has not changed much in the past 20 years. Back in the 1990s, enterprise search already meant using search engines to index multiple, heterogeneous data sets into a single search experience, with full document-level security, and that pretty much sums up the situation today.

A survey conducted by SearchYourCloud, a search and security company, revealed that a third of respondents spend between five and 25 minutes searching every time they want to find a document, and that only one in five searches succeeds on the first try. The search for corporate information is eating into workplace productivity: according to 80 per cent of respondents, it can take workers up to eight searches to find the right document and information.

Verity offered search from the late 1980s until it was acquired by Autonomy in 2006, providing unified results from multiple, simultaneous searches from the desktop to the enterprise. While Verity took a semantic approach to search, Autonomy took a statistical approach, modelling the statistical relationships between terms. In 2011, HP acquired Autonomy to bolster its search and analytics business. Also in 2011, Oracle, which had launched Oracle Secure Enterprise Search in 2006, acquired Endeca. And to improve SharePoint's search capabilities, Microsoft acquired FAST Search & Transfer.

While it may seem like the search industry grew in this time period with these new companies emerging, it is actually the opposite. Rather, Autonomy and Microsoft came along and made search more about consultancy and less about usability or actual results. It took what was a vital part of the search world and combined services to enhance it for general enterprise, not make it easier to search your information.

Nevertheless, in the past five years, the demand for a secure search that delivers results quickly has grown, in part due to Big Data, mobility and cloud services. Big Data encompasses the massive amounts of data stored in databases, spreadsheets, emails, reports and so on, which generally need to be searched separately. With mobile and cloud services, users and their devices are more dispersed, with important files stored on numerous devices, on-premises and in the cloud. The problem is made even worse because most data is collected into large repositories, which are slower and more complicated to search. The result is that companies have a big “pile” of data – whether in a database, a series of Excel files, or stored photos, emails and other files – that is unusable and consequently reduces productivity.

With the advent of federated search, the ability to search across multiple repositories has improved. Moreover, with federated de-duplicated results, users do not receive thousands of irrelevant documents or emails. Users can simultaneously search across applications. It is best to take a non-repository processing approach and keep the existing data silos separate. A large repository can be kludgy with inherent security risks and to combine multiple silos may create problems in reconciling different processing power and security levels.
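The federated, de-duplicated approach described above can be sketched in a few lines. The Python below is an illustration of the general idea, not any vendor's implementation; the silo contents and matching rule are invented. Each silo is queried in place and the merged results are collapsed by a stable document ID, so the silos stay separate and the user never sees the same document twice.

```python
def federated_search(query, silos):
    """Query each data silo separately and merge the results,
    de-duplicating by document ID so the same document stored
    in several repositories is returned only once."""
    seen = set()
    merged = []
    for silo_name, documents in silos.items():
        for doc in documents:
            # Naive substring match stands in for each silo's own search engine.
            if query.lower() in doc["text"].lower() and doc["id"] not in seen:
                seen.add(doc["id"])
                merged.append({**doc, "source": silo_name})
    return merged

# Hypothetical silos: the same payroll summary lives in two places.
silos = {
    "sharepoint": [{"id": "doc-1", "text": "Q3 payroll summary"}],
    "email":      [{"id": "doc-1", "text": "Q3 payroll summary"},
                   {"id": "doc-2", "text": "payroll approval thread"}],
}

results = federated_search("payroll", silos)
assert len(results) == 2   # the duplicate copy was collapsed
```

Note what the sketch deliberately avoids: nothing is copied into a central repository. Each silo keeps its own storage and security model, and only the merged, de-duplicated result list crosses the boundary.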

To find needed information, an enterprise search tool must deliver exact results in a timely manner, not a boatload of assorted data that does not include what the person actually needs. Unlike a Google search, which surfaces whatever is deemed most relevant based on visits and cookies, and for which more results are better, enterprise search should deliver information relevant only to the task at hand (payroll, results, competitive information and so on), whether it lives in Excel, SharePoint or a HANA database.

Furthermore, because this type of information is typically private data, the delivered results need to be available only to those who need access, while keeping out those who would use the information without permission. Consequently, it is important to determine which question you are asking the data to answer. Rapid response is also key, which is another reason that search needs to find the data quickly so you can act on it. Whether it is to solve a client issue, approve overtime and/or commissions, present information to the board of directors, or improve the sales process, it's critical to have instant access to accurate search results to maintain enterprise productivity.

The good news is that search is evolving, albeit slowly. Developers of enterprise search are now coming to realise that it must:

Bring needed answers and/or files, not merely a list of results.
Learn from analytics and past searches.
Search seamlessly across multiple repositories.
Deliver results blindingly fast.

While search has not improved much in the past 20 years, there are new federated searches that can securely find the right document at the right time. These new types of search also add security to protect privacy of files as they traverse networks.

Things are looking up in the search world, so stay tuned on the journey.

Source: http://www.itproportal.com/2016/02/28/the-search-continues-history-of-searchs-unsatisfactory-progress/

Categorized in Research Methods

Social media sites and privacy are somewhat inherently at odds. After all, the point of social media is to share your life with the world; the very opposite of maintaining your privacy. Still, there is a difference between sharing parts of your life and all of it. Thus, a number of legal lines have been drawn in the sand regarding privacy on social media sites.


While the sharing of social media may help us to feel closer with friends and family, even if they are far away, social media can create a number of problems, too. While pictures from a drunken night out with friends or soaking up sun in a skimpy bikini on the beach might be totally fine to share with your friends, you may not want employers or coworkers finding them. Similarly, you almost certainly do not want the world knowing your passwords or private messages with other people.


Until recently, there has been very little to protect those who either intentionally or accidentally share too much on social media. Prior to 2013, US lawmakers were more concerned with gaining access to information on social media than with protecting it from others. Other nations around the world, by contrast, recognized the potential risks of social media much earlier than the US and began enacting laws to protect privacy much sooner. Even today in the United States, only certain classes of information enjoy any sort of protection under federal law, generally relating to things like financial transactions, health care records, and information about kids under the age of 13. Nearly everything else remains fair game, provided it is obtained through legally acceptable means (i.e., not by virtue of a hacking attack, fraud, or other illegal activity).


Traditionally, two bodies have acted to protect the rights of those online: the Federal Trade Commission (FTC) and state attorneys general. Throughout the rise of social media, however, these bodies have acted only to enforce published privacy policies. If a site claimed not to collect certain information, or merely omitted it from its disclosures, the site itself might be subject to prosecution, but third parties that gained access to that information legally generally were not. Moreover, social media sites with vague privacy policies that did not clearly disclose what information they gathered and whether they sold it, or sites that did disclose their practices of gathering and selling information (even if the disclosure was hard to find), were generally not subject to any sort of enforcement action.


Recently, though, the FTC has changed its philosophy on these matters, using its powers to enforce privacy policies on social media sites to force many of them into both monetary settlements and long-term consent orders permitting the FTC to exercise greater control over those sites' policies.


States have had somewhat different experiences with social media laws. Attorneys general have had mixed results trying to enforce privacy policies, and even less success when trying to strong-arm social media sites into offering tighter protections of user information. More than 45 jurisdictions around the US have some sort of data breach notice law requiring companies to disclose intentional or accidental disclosures of information. While these laws would generally encompass social media sites as well, such sites are often excluded by special provisions, because they are specifically designed to allow users to share personal information with the larger public. Thus, many state laws are largely ineffectual when it comes to protecting one's privacy rights on social media sites.


As social media sites grow in popularity and become increasingly central to the lives of Americans who use them, privacy intrusions have similarly grown increasingly common. Unfortunately, as is often the case with new technologies, the laws relating to those technologies lag years if not decades behind the developments themselves. States, with smaller legislatures and more agile means of enacting laws, are leading the way in creating new regulations, but many of these may suffer under the scrutiny of judicial review (particularly if they contradict existing federal laws). Additional legal changes will likely take place in the coming months and years, but true privacy on social media is likely not going to occur in the near future.


In the meantime, the best way to avoid privacy concerns through social media sites is to avoid using them. Of course, that is rather like suggesting that the best way to avoid a wiretap is to not speak on the phone, so odds are good that you will continue using social media and accepting the risk of somewhat eroded privacy. However, if you do feel that you have experienced a breach of your privacy in violation of a site’s privacy policy, consider speaking with your state’s attorney general or reporting the situation to the FTC. You may also want to consult with an attorney. You can find a lawyer experienced in internet privacy laws by visiting HG.org’s attorney search feature.


Source:  https://www.hg.org/article.asp?id=36795


Categorized in Internet Privacy