

Privacy browser DuckDuckGo has launched a new extension for Chrome that's designed to block Google's new algorithm for tracking users' browsing activity for ad selection.

DuckDuckGo's new browser extension blocks FLoC (Federated Learning of Cohorts), which Google introduced to users in March as a replacement for third-party cookies that track individuals across the web.

FLoC is proposed as a method for offering greater anonymity by concealing a user's browsing activity within a group (or 'cohort') of other anonymized users with similar browsing habits. Advertisers can then serve relevant ads to cohorts of several thousand users with similar interests, while the identity of each individual user remains hidden.

But some see problems with this proposal. While the idea of 'hiding' individuals within a group sounds like better news for user privacy, websites can still target users with ads based on their assigned 'FLoC ID', which essentially offers up a summary of interests and demographic information based on a user's browsing habits. What's more, websites can theoretically still track individuals, because every website you visit records your IP address.

This is where DuckDuckGo's new tool comes in. Currently, FLoC is only being used within Google Chrome, and while it has not yet been rolled out en masse, Google has announced plans to begin trialing FLoC-based cohorts with advertisers starting in Q2.

The FLoC-blocking feature is included in version 2021.4.8 and newer of the DuckDuckGo extension. DuckDuckGo Search is now also configured to opt out of FLoC.
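Opting a search site out of FLoC works at the server level: an empty allowlist for the `interest-cohort` feature in the Permissions-Policy response header tells Chrome to exclude visits to that site from cohort computation. A minimal sketch of how a site owner might send it (a generic WSGI app for illustration, not DuckDuckGo's actual code):

```python
# Minimal WSGI sketch: opting a site's pages out of FLoC cohort
# computation via the Permissions-Policy response header.
# (Illustrative only -- not DuckDuckGo's actual implementation.)

def application(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # An empty allowlist declines the interest-cohort feature, so
        # visits to this site are excluded from the browser's FLoC
        # calculation.
        ("Permissions-Policy", "interest-cohort=()"),
    ]
    start_response("200 OK", headers)
    return [b"<h1>FLoC opted out</h1>"]
```

Any framework that lets you set response headers can do the same; the header value is what matters.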

"We're disappointed that, despite the many publicly voiced concerns with FLoC that have not yet been addressed, Google is already forcing FLoC upon users without explicitly asking them to opt-in," DuckDuckGo said in a blog post.

"We're nevertheless committed and will continue to do our part to deliver on our vision of raising the standard of trust online."

Google's Privacy Sandbox

Google has been working on a replacement for third-party cookies for some time. As detailed in a post on its Chromium Blog in January this year, FLoCs are one of a handful of methods the search giant is looking at as part of its 'Privacy Sandbox' for the web.

The company has claimed that FLoC algorithms are at least 95% as effective as cookie-based advertising when it comes to helping advertisers target users, which it says is "great news for users, publishers, and advertisers".

Chetna Bindra, Google's Group Product Manager for User Trust and Privacy, suggested in a blog post in January that tools like FLoC and other privacy-preserving methods proposed as part of Google's Privacy Sandbox would enhance fraud protection and prevent 'fingerprinting', whereby data from a user's browser is gathered to create a profile.

Bindra labeled FLoC a "privacy-first alternative to third-party cookies" that "effectively hides individuals 'in the crowd' and uses on-device processing to keep a person's web history private on the browser."

Yet others have pointed out that FLoC doesn't eliminate the threat of fingerprinting entirely. As well as the possibility of websites identifying users through a combination of their cohort ID and IP address, cohort IDs will also be accessible to any third-party trackers within the websites that users visit.

Google has said that it will work to ensure that "sensitive interest categories" like religion, identity, sexual interests, race, and medical or personal issues can't be used to target ads to users or to promote advertisers' products or services.

The Electronic Frontier Foundation (EFF), a digital rights group, argues that these precautions don't go far enough. "The proposal rests on the assumption that people in 'sensitive categories' will visit specific 'sensitive' websites, and that people who aren't in those groups will not visit said sites," it said in a blog post.

"But behavior correlates with demographics in unintuitive ways. It's highly likely that certain demographics are going to visit a different subset of the web than other demographics are, and that such behavior will not be captured by Google's 'sensitive sites' framing," the EFF added.

There are other methods for blocking FLoC, as laid out by DuckDuckGo. Unsurprisingly, the main one involves bypassing Google Chrome entirely – bear in mind, of course, that DuckDuckGo has its own competing browser in the game.

Users can also remain logged out of their Google account; switch off ad personalization within the Google Ad Settings; avoid syncing their search history data with Chrome; and disable Web & App Activity within Google's Activity Controls.

Google plans to roll out updated activity controls with the incoming Chrome 90 release.

[Source: This article was published in techrepublic.com By Owen Hughes - Uploaded by the Association Member: Jasper Solander]

Categorized in Search Engine

According to new research from We Are Social and Hootsuite, there are 3.8 billion social media users around the world. 

“Nearly 60% of the world’s population is already online,” said Simon Kemp, Chief Analyst at DataReportal, which produced the research. “The latest trends suggest that more than half of the world’s total population will use social media by the middle of this year.”

With this type of reach, social media use is essential for most journalists, both for story gathering and distribution. It’s a fast-moving space, so journalists need to be alive to the challenges – and opportunities – it affords.

Below are six emerging issues and considerations for journalists in 2020.

(1) Mis- and disinformation

A fundamental challenge for social media users is that mis- and disinformation typically looks exactly the same as real news in your feed. As a result, at first glance it can be very hard for journalists and non-journalists alike to discern fact from fiction.

As a result, we all need to “think before we tweet,” check the provenance of material we are sharing or using in our work, and be aware of the latest techniques being used to spread misinformation, conspiracy theories, and partisan agendas. 

To address this, the 11 tips from National Public Radio’s On The Media are a useful starting point. Digging deeper, I highly recommend the training materials and the newsletters produced by First Draft, a global nonprofit specializing in disinformation and other media trends. These are valuable resources that all journalists should be familiar with.

(2) Weaponization of social media

The spread of mis- and disinformation can be accidental, for example when people share stories from satirical websites like The Onion, but assume they’re true. 

Who can blame them? Sometimes the truth is stranger than fiction. Remember those stories from last year about President Trump wanting to buy Greenland? You couldn’t make this stuff up!

However, sharing this information isn’t always an accident. We also see the weaponization of social media by state actors and opportunists with the intention of influencing what we see, and our view of the world around us. Driven by financial, as well as ideological motives, this type of online activity is only going to increase.

That means news consumers — and producers — need to be more media literate than ever. 

As journalists, we need to be able to interrogate sources in new and more sophisticated ways. These requirements will only increase as deep fakes and other manipulation techniques become more advanced. 


(3) Privacy concerns 

Journalists and anyone using social networks need to be cognizant of the potential repercussions of what they say online. These spaces aren’t safe from onlookers, and actions taken in these spaces are not without consequence.

In Egypt and the United Arab Emirates, for example, people have been imprisoned for online posts and WhatsApp messages.

And it’s not just what you personally say online; you can also be impacted by association.

Last year, incoming Harvard freshman Ismail Ajjawi, a Palestinian from Lebanon, was initially denied entry into the United States, purportedly due to social media posts from his Facebook friends that expressed political opposition to the U.S. Ajjawi was eventually allowed into the country and able to start his classes.

(4) The move to closed networks

Because of these trends, we are witnessing a rise of self-censorship, as consumers become increasingly wary about what they say and post online. 

In the U.S., research from Pew Research Center highlighted a desire to avoid contentious conversations or expressing opinions that may be in the minority. 

Elsewhere, conversations are moving to closed networks like WhatsApp groups and Telegram channels due to their encryption and a perception that these channels can bypass digital eavesdropping. In recognition of this, Facebook announced a pivot to privacy last year. 

A key challenge for journalists is that conversations move from the open internet to closed spaces. Being able to access these discussions is not easy, and if you do gain access, do you identify as a journalist? Does this skew the conversation and, in some cases, create risks to your personal safety? Work by former BBC Social Media Editor Mark Frankel offers a good starting point to these issues and considerations. 

(5) Filter bubbles 

Tech platforms are designed to show us more of what they think we like, rather than what we need. 

Essentially your news feed on Twitter, Facebook, or Instagram is a giant recommendation machine. These recommendations are based on what the platform thinks you want, which makes it difficult to encounter views that differ from your own.

For journalists, that means we have to remind ourselves that online discussions are not representative of populations at large and that they are deeply filtered by both platform algorithms and what people choose to post. 

As storytellers, we need to work hard to be exposed to different points of view. Social media is a tool to help us in our work, but traditional methods of identifying and building relationships with sources remain just as pertinent.

This is especially true in countries and regions where many voices and experiences go unheard on social media, either because people do not have access to the technology, or they do not understand how to use it. 

Social media complements the tools and techniques journalists have always used, but it is not a substitute for them.

(6) Where to invest your time and energy 

“The world’s internet users will spend a cumulative 1.25 billion years online in 2020,” said Simon Kemp, “with more than one-third of that time spent using social media.” 

One final challenge for journalists and media organizations is understanding where audiences are spending that time — and the implications of this. 

According to data from GlobalWebIndex, the average internet user had 8.5 social media accounts in 2018, up from 4.8 in 2014. However, the way users divide their time between these platforms varies. Although Facebook is the overall market leader, time spent on different networks varies by demographic and country.

Moreover, because each platform has its own characteristics, social media strategies that work for one platform don’t necessarily work for another. 

As a result, diving into data from DataReportal, GlobalWebIndex, the Digital News Report, and other sources is vital if you’re to understand local trends and their implications.

In a time-pressured newsroom you cannot be everywhere online, so alongside the wider trends outlined in this piece, determining where your audience is — and what they want from their time on a given platform — is essential for your social media success.


[Source: This article was published in ijnet.org By Damian Radcliffe - Uploaded by the Association Member: James Gill]
Categorized in Investigative Research

Identity theft is such a growing problem that it’s become almost routine—Marriott, MyFitnessPal, LinkedIn, Zynga, and even Equifax (of all places) have had high-profile data breaches in recent years, affecting hundreds of millions of people. In response, Experian and other companies are marketing “dark web scans” that check whether your personal information is circulating among identity thieves. But what is a dark web scan, and do you need one?

The dark web, explained 

The dark web is a large, hidden network of websites not indexed or found on typical search engines. It’s also a hub of illegal activity, including the buying and selling of stolen financial and personal information. If your information ends up on dark web sites after a data breach, an identity thief could use that data to open credit cards, take out loans, or withdraw money from your bank account.

How dark web scans work 

A dark web scan searches the dark web to see whether your medical identification info, bank account numbers, or Social Security number are being shared. If you get a positive result, the service will suggest that you change your passwords, use stronger ones, or place a credit freeze on your profiles with the three major bureaus (Experian, Equifax, and TransUnion). A negative result doesn’t necessarily mean your data hasn’t been exposed, of course, as there’s no way for any company to search the entirety of the dark web.

Many of these services offer you a free scan, but that only covers certain information like phone numbers, passwords, and Social Security numbers. If you want to set up alerts, or search for other information like bank account numbers, passports, or your driver’s license, or have access to credit reports (which are already free) these services will typically charge a monthly fee (Experian offers this service for $9.99 per month after a 30-day free trial).
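The article doesn’t describe how these scans check your data without creating a new exposure risk, but one well-known technique in this space is the k-anonymity range query popularized by the free Pwned Passwords service: only the first five hex characters of a credential’s SHA-1 hash are sent to the service, and the match is checked locally. A sketch of the client-side split (illustrative; not how Experian’s product necessarily works):

```python
import hashlib

def range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range lookup.

    Only the 5-character prefix is sent to the breach-check service;
    the service returns every hash suffix in that bucket, and the
    caller checks for a match locally, so neither the password nor
    its full hash ever leaves the machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = range_query_parts("password")
# Only the 5-character prefix would be transmitted; the 35-character
# suffix stays local and is compared against the returned bucket.
```

The privacy point is the design choice: the server never learns which credential you were checking, only that it fell in a bucket shared with many others.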

Is a dark web scan worth paying for?

In an interview for NBC News’ Better, Neal O’Farrell, executive director of the Identity Theft Council, called dark web scanning “a smoke and mirrors deal” that doesn’t “go to the cause of the problem, which is vigilance, awareness, taking care of your own personal information, freezing your credit.”

[Source: This article was published in twocents.lifehacker.com By Mike Winters - Uploaded by the Association Member: Eric Beaudoin]

Categorized in Internet Privacy

LastPass' new Security Dashboard gives users a complete picture of their online security

Knowing whether your passwords have been leaked online is an important step in protecting your online accounts, which is why LastPass has unveiled a new Security Dashboard that provides end users with a complete overview of the security of their online accounts.

The company's new Security Dashboard builds on last year's LastPass Security Challenge, which analyzed users' stored passwords and provided a score based on how secure they were, by adding dark web monitoring. The new feature is available to LastPass Premium, Families and Business customers and it proactively watches for breach activity and alerts users when they need to take action.

In addition to showing users their weak and reused passwords, the new Security Dashboard now gives all LastPass users a complete picture of their online security to help them regain control over their digital life and know that their accounts are protected.

Dark web monitoring

According to a recent survey of more than 3,000 global consumers conducted by LastPass, 40 percent of users don't know what the dark web is. The majority (86%) of those surveyed claimed they have no way of even knowing if their information is on the dark web.

LastPass' new dark web monitoring feature proactively checks email addresses and usernames against Enzoic’s database of breached credentials. If an email address is found in this third-party database, users will be notified immediately via email and by a message in their LastPass Security Dashboard. Users will then be prompted to update the password for that compromised account.

Vice president of product management, IAM at LogMeIn, Dan DeMichele explained why LastPass decided to add dark web monitoring to its password manager in a press release, saying:

“It’s extremely important to be informed of ways to protect your identity if your login, financial or personal information is compromised. Adding dark web monitoring and alerting into our Security Dashboard was a no brainer for us. LastPass already takes care of your passwords, and now you can extend that protection to more parts of your digital life. LastPass is now equipped to truly be your home for managing your online security – making it simple to take action and stay safe in an increasingly digital world. With LastPass all your critical information is safe so you can access it whenever and wherever you need to.”

[Source: This article was published in techradar.com By Anthony Spadafora - Uploaded by the Association Member: Anna K. Sasaki]

Categorized in Internet Privacy

Privacy on the internet is very important to many users, and to achieve it they turn to Tor or a VPN. But which is better? What are the advantages of using one or the other? In today’s article we look in detail at the advantages and disadvantages of both.

When it comes to internet privacy, most people don’t pay much attention to it. They keep all their data in their Google accounts, log in anywhere, and leave their social networks unconfigured to protect their privacy.

We could give examples all day. But what can happen if I expose my data in this way? The simple answer: anything.

Anything from attacks by cybercriminals to surveillance by government agencies, restricted access to websites, and more. Information is one of the most powerful tools you can hand to a company or an individual.

When we browse the internet in the normal way, so to speak, we are never doing it anonymously. Even the incognito mode of the most popular browsers is not an effective way to achieve this.

This is precisely why many users decide to use a VPN or browse through Tor. Both systems are very good for browsing the internet anonymously, although their differences are notable, and we cover them below.

Main advantages of using a VPN network

Explaining how a VPN works is quite simple: it adds a private network on top of our connection. In short, the VPN takes our traffic, encrypts it, and then sends it on to the destination server.

At a basic level the process is straightforward. Instead of connecting to a website directly, we first go through an intermediate server and reach the destination site from there.

Using a VPN network is highly recommended for those who connect to the internet from public WiFi networks. Also, one of the great advantages it has is that you can camouflage your real location.

Say you are in Argentina, but the VPN server is in the United States. Every website you access will believe you are in the United States, which comes in handy for bypassing any kind of content blocking on the internet.

Main advantages of using Tor

The idea of Tor is to keep the user anonymous at all times when browsing the internet. To achieve this, our traffic passes through a large number of nodes before the website is reached. This makes it very difficult to determine our location or connection details such as our IP address.

It is a reliable system that improves our privacy on the internet, but browsing completely anonymously is not truly possible, not even with Tor: at the final node, the data is decrypted in order to reach the site in question. So we are still somewhat exposed, although it is much harder for anyone to find out anything about us. Tor takes care of that.

When we use Tor, we are much more secure than when using any common browser, but bear in mind that it is not an infallible system. We are much safer visiting websites with secure connections (HTTPS) than sites that do not have encryption enabled.

One very important extra to always keep in mind: if a website is not secure, that is, not encrypted (HTTPS), do not enter any kind of information into it. By this we mean login credentials, email, bank accounts, credit cards, and so on.
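In practice, applications reach the Tor network through the SOCKS5 proxy that a locally running Tor client opens (port 9050 by default). A minimal sketch of the client-side configuration, assuming a local Tor daemon and an HTTP library with SOCKS support:

```python
# Sketch: pointing an HTTP client at a locally running Tor daemon.
# Assumes Tor is listening on its default SOCKS port, 9050.

def tor_proxies(port: int = 9050) -> dict:
    # The "socks5h" scheme resolves DNS through the proxy as well,
    # so hostname lookups do not leak outside the Tor circuit.
    address = f"socks5h://127.0.0.1:{port}"
    return {"http": address, "https": address}

# Usage (requires the requests library with SOCKS support installed):
# import requests
# r = requests.get("https://check.torproject.org/", proxies=tor_proxies())
```

The Tor Browser bundles all of this, so most users never touch a proxy setting; the sketch just shows where the anonymization layer sits relative to an ordinary application.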

Tor vs VPN: Which one should you use?

The first thing you should know is that most quality VPNs are paid. Tor, by contrast, is totally free, and we will never have to pay anything.

Another thing to keep in mind is that VPN services do store user data. Anonymity is lost this way, especially if the provider has to answer to the law.

With Tor this does not happen; its only real drawback is that browsing speed is not exactly the best, regardless of the speed of your connection.

The bottom line is pretty simple: if you are an average user concerned about how companies use your private data, then a VPN is the best choice. It is faster than Tor, which lets us consume multimedia content without any kind of problem.

Tor, on the other hand, is for people who need a great deal of anonymity on the internet, something quite common among people who are up against governments, such as various journalists in Venezuela, to give one example.

The differences between Tor and a VPN are quite clear. Each is used for something slightly different, and both promise anonymity. But bear in mind that total, long-term anonymity on the internet does not exist.

[Source: This article was published in explica.co - Uploaded by the Association Member: Anthony Frank] 

Categorized in Internet Privacy

“For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. And the more government or the business sector shows genuine regard and respect for peoples’ privacy in their actions, as well as in their word and policies, the more that trust will come into being.” Dr. Anita L. Allen

Dr. Anita Allen serves as Vice Provost for Faculty and Henry R. Silverman Professor of Law and Philosophy at the University of Pennsylvania. Dr. Allen is a renowned expert in the areas of privacy, data protection, ethics, bioethics, and higher education, having authored the first casebook on privacy law and has been awarded numerous accolades and fellowships for her work. She earned her JD from Harvard and both her Ph.D. and master’s in philosophy from the University of Michigan. I had the opportunity to speak with her recently about her illustrious career, the origins of American privacy law and her predictions about the information age.

Q: Dr. Allen, a few years ago you spoke to the Aspen Institute and offered a prediction that “our grandchildren will resurrect privacy from a shallow grave just in time to secure the freedom, fairness, democracy, and dignity we all value… a longing for solitude and independence of mind and confidentiality…” Do you still feel that way, and if so, what will be the motivating factors for reclaiming those sacred principles?

 

A: Yes, I believe that very hopeful prediction will come true because there’s an increasing sense in the general public of the extent to which we have perhaps unwittingly ceded our privacy controls to the corporate sector, and in addition to that, to the government. I think the Facebook problems that had been so much in the news around Cambridge Analytica have made us sensitive and aware of the fact that we are, by simply doing things we enjoy, like communicating with friends on social media, putting our lives in the hands of strangers.

And so, these kinds of disclosures, whether they’re going to be on Facebook or some other social media business, are going to drive the next generation to be more cautious. They’ll be circumspect about how they manage their personal information, leading to, I hope, eventually, a redoubled effort to ensure our laws and policies are respectful of personal privacy.

Q: Perhaps the next generation heeds the wisdom of their elders and avoids the career pitfalls and reputational consequences of exposing too much on the internet?

A: I do think that’s it as well. Your original question was about my prediction that the future would see a restoration of concern about privacy. I believe that, yes, as experience shows the younger generation just what the consequences are of living your life in public view, there will be a turnaround to some extent, and people will focus on what they have to lose. It’s not just that you could lose job opportunities. You could lose school admissions. You could lose relationship opportunities and the ability to find the right partner because your reputation is so horrible on social media.

All of those consequences are causing people to be a little more reserved. It may lead to a big turnaround when people finally get enough control over their understanding of those consequences that they activate their political and governmental institutions to do better by them.

Q: While our right to privacy isn’t explicitly stated in the U.S. Constitution, it’s reasonably inferred from the language in the amendments. Yet today, “the right to be forgotten” is an uphill battle. Some bad actors brazenly disregard a “right to be let alone,” as defined by Justice Brandeis in 1890. Is legislation insufficient to protect privacy in the Information Age, or is the fault on the part of law enforcement and the courts?

A: I’ve had the distinct pleasure to follow developments in privacy law pretty carefully for the last 20 years, now approaching 30, and am the author or co-author of numerous textbooks on the right to privacy in the law, and so I’m familiar with the legal landscape. I can say from that familiarity that the measures we have in place right now are not adequate. It’s because the vast majority of our privacy laws were written literally before the internet, and in some cases in the late 1980s or early 1990s or early 2000s as the world was vastly evolving. So yes, we do need to go back and refresh our electronic communications and children’s internet privacy laws. We need to rethink our health privacy laws constantly. And all of our privacy laws need to be updated to reflect existing practices and technologies.

The right to be forgotten, which is a right described today as a new right created by the power of Google, is an old right that goes back to the beginning of privacy law. Even in the early 20th century, people were concerned about whether or not dated, but true information about people could be republished. So, it’s not a new question, but it has a new shape. It would be wonderful if our laws and our common law could be rewritten so that the contemporary versions of old problems, and completely new issues brought on by global technologies, could be rethought in light of current realities.

Q: The Fourth Amendment to the Constitution was intended to protect Americans from warrantless search and seizure. However, for much of our history, citizens have observed as surveillance has become politically charged and easily abused. How would our founders balance the need for privacy, national security, and the rule of law today?

A: The Fourth Amendment is an amazing provision that protects persons from a warrantless search and seizure. It was designed to protect peoples’ correspondence, letters, papers, as well as business documents from disclosure without a warrant. The idea of the government collecting or disclosing sensitive personal information about us was the same then as it is now. The fact that it’s much more efficient to collect information could be described as almost a legal technicality as opposed to a fundamental shift.

I think that while the founding generation couldn’t imagine the fastest computers we all have on our wrists and our desktops today, they could understand entirely the idea that a person’s thoughts and conduct would be placed under government scrutiny. They could see that people would be punished by virtue of government taking advantage of access to documents never intended for them to see. So, I think they could very much appreciate the problem and why it’s so important that we do something to restore some sense of balance between the state and the individual.

Q: Then, those amendments perhaps anticipated some of today’s challenges?

A: Sure. Not in the abstract, but think of it in the concrete. If we go back to the 18th and 19th centuries, you will find some theorists speculating that someday there will be new inventions that will raise these types of issues. Warren and Brandeis talked specifically about new inventions and business methods. So, it’s never been far from the imagination of our legal minds that more opportunities would come through technology. They anticipated technologies that would do the kinds of things once only done with pen and paper, things that can now be done in cars and with computers. It’s a structurally identical problem. And so, while I do think our laws could be easily updated, including our constitutional laws, the constitutional principles are beautiful in part because fundamentally they do continue to apply even though times have changed quite a bit.

Some of the constitutional language we find in other countries, around ideas like human dignity (now applied to privacy regulations), shows that, to some extent, very general constitutional language can be put to other purposes.

Q: In a speech to the 40th International Data Protection and Privacy Commissioners Conference, you posited that “Every person in every professional relationship, every financial transaction and every democratic institution thrives on trust. Openly embracing ethical standards and consistently living up to them remains the most reliable ways individuals and businesses can earn the respect upon which all else depends.” How do you facilitate trust, ethics, and morality in societies that have lost confidence in the authority of their institutions and have even begun to question their legitimacy?

A: For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. Unfortunately, the more draconian and unreasonable state actors behave respecting people’s privacy, the less people will be able to generate the kind of trust that’s needed. And the more government or the business sector shows genuine regard and respect for peoples’ privacy in their actions, as well as in their word and policies, the more that trust will come into being.

I think that people have to begin to act in ways that make trust possible. I have to act in ways that make trust possible by behaving respectfully towards my neighbors, my family members, and my colleagues at work, and they the same toward me. The businesses that we deal with have to act in ways that are suggestive of respect for their customers and their vendors. Up and down the chain. That’s what I think. There’s no magic formula, but I do think there’s some room for conversation for education in schools, in religious organizations, in NGOs, and policy bodies. There is room for conversations that enable people to find discourses about privacy, confidentiality, data protection that can be used when people demonstrate that they want to begin to talk together about the importance of respect for these standards.

It’s surprising to me how often I’m asked to define privacy or define data protection. When we’re at the point where experts in the field have to be asked to give definitions of key concepts, we’re, of course, at a point where it’s going to be hard to have conversations that can develop trust around these ideas. That’s because people are not always even talking about the same thing. Or they don’t even know what to talk about under the rubric. We’re in the very early days of being able to generate trust around data protection, artificial intelligence, and the like because it’s just too new.

Q: The technology is new, but the principles are almost ancient, aren’t they?

A: Exactly. If we have clear conceptions about what we’re concerned about, whether it’s data protection or what we mean by artificial intelligence, then those ancient principles can be applied to new situations effectively.

Q: In a world where people have a little less shame about conduct, doesn’t that somehow impact the general population’s view of the exploitation of our data?

A: It seems to me we have entered a phase where there’s less shame, but a lot of that’s OK because I think we can all agree that maybe in the past, we were a bit too ashamed of our sexuality, of our opinions. Being able to express ourselves freely is a good thing. I guess I’m not sure yet on where we are going because I’m thinking about, even like 50 years ago, when it would have been seen as uncouth to go out in public without your hat and gloves. We have to be careful that we don’t think that everything that happens that’s revealing is necessarily wrong in some absolute sense.

It’s different to be sure. But what’s a matter of not wearing your hat and gloves, and what’s a matter of demeaning yourself? I certainly have been a strong advocate for moralizing about privacy and trying to get people to be more reserved and less willing to disclose when it comes to demeaning oneself. And I constantly use the example of Anthony Weiner as someone who, in public life, went too far, and not only disclosed but demeaned himself in the process. We do want to take precautions against that. But if it’s just a matter of, “we used to wear white gloves to Sunday school, and now we don’t…” If that’s what we’re talking about, then it’s not that important.

Q: You studied dance in college and then practiced law after graduating from Harvard, but ultimately decided to dedicate your career to higher education, writing, and consulting. What inspired you to pursue an academic career, and what would you say are the lasting rewards?

A: I think a love of reading and ideas guided my career. Reading, writing, and ideas, and independence governed my choices. As an academic, I get to be far freer than many employees are. I get to write what I want to write, to think about what I want to think, and to teach and to engage people in ideas, in university, and outside the university. Those things governed my choices.

I loved being a practicing lawyer, but you have to think about and deal with whatever problems the clients bring to you. You don’t always have that freedom of choice of topic to focus on. Then when it comes to things like dance or the arts, well, I love the arts, but I think I’ve always felt a little frustrated about the inability to make writing and debate sort of central to those activities. I think I am more of a person of the mind than a person of the body ultimately.

[Source: This article was published in cpomagazine.com By RAFAEL MOSCATEL - Uploaded by the Association Member: Grace Irwin]

Categorized in Internet Ethics

As we close out 2019, we at Security Boulevard wanted to highlight the five most popular articles of the year. Following is the fifth in our weeklong series of the Best of 2019.

Privacy. We all know what it is, but in today’s fully connected society can anyone actually have it?

For many years, it seemed the answer was no. We were so enamored with Web 2.0, the growth of smartphones, GPS satnav, and instant updates from our friends that we seemed not to care about privacy. But while industry professionals argued Facebook was collecting too much private information, CEO Mark Zuckerberg understood the vast majority of Facebook users were not as concerned. He said in a 2011 Charlie Rose interview, “So the question isn’t what do we want to know about people. It’s what do people want to tell about themselves?”

In the past, it would be perfectly normal for a private company to collect personal, sensitive data in exchange for free services. Further, privacy advocates were almost criticized for being alarmist and unrealistic. Reflecting this position, Scott McNealy, then-CEO of Sun Microsystems, infamously said at the turn of the millennium, “You have zero privacy anyway. Get over it.”

And for another decade or two, we did. Privacy concerns were debated, but serious action on the part of corporations and governments never materialized. Ten years ago, the Payment Card Industry Security Standards Council had the only meaningful data security standard, ostensibly imposed by payment card issuers against processors and users to avoid fraud.

Our attitudes have shifted since then. Expecting data privacy is now seen by society as perfectly normal. We are thinking about digital privacy like we did about personal privacy in the ’60s, before the era of hand-held computers.

So, what happened? Why does society now expect digital privacy? Especially in the U.S., where privacy under the law is not so much a fundamental right as a tort? There are a number of factors, of course. But let’s consider three: a data breach that gained national attention, an international elevation of privacy rights and growing frustration with lax privacy regulations.

Our shift in the U.S. toward expecting more privacy started accelerating in December 2013 when Target experienced a headline-gathering data breach. The termination of the then-CEO and the staggering operating loss the following year, allegedly due to customer dissatisfaction and reputation erosion from this incident, got the boardroom’s attention. Now, data privacy and security are chief strategic concerns.

On the international stage, the European Union started experimenting with data privacy legislation in 1995. Directive 95/46/EC required national data protection authorities to explore data protection certification. This led to an opinion issued in 2011 which, through a series of further opinions and other actions, culminated in the General Data Protection Regulation (GDPR) entering into force in 2016. This timeline is well-documented on the European Data Protection Supervisor’s website.

It wasn’t until 2018, however, that we noticed GDPR’s fundamental privacy changes. Starting then, websites that collected personal data had to notify visitors and ask for permission first. Notice the pop-ups everywhere asking for permission to store cookies? That’s a byproduct of the GDPR.

What happened after that? Within a few short years, many local governments in the U.S. became more and more frustrated with the lack of privacy progress at the national level. GDPR was front and center, with several lawsuits filed against high-profile companies that allegedly failed to comply.

As the GDPR demonstrated the possible outcomes of serious privacy regulation, smaller governments passed such legislation. The State of California passed the California Consumer Privacy Act and—almost simultaneously—the State of New York passed the Personal Privacy Protection Law. Both of these laws give U.S. citizens significantly more privacy protection than any under U.S. law. And not just to state residents, but also to other U.S. citizens whose personal data is accessed or stored in those states.

Without question, we as a society have changed course. The unfettered internet has had its day. Going forward, more and more private companies will be subject to increasingly demanding privacy legislation.

Is this a bad thing? Something nefarious? Probably not. Just as we have always expected privacy in our physical lives, we now expect privacy in our digital lives as well. And businesses are adjusting toward our expectations.

One visible adjustment is more disclosure about exactly what private data a business collects and why. Privacy policies are easier to understand, as well as more comprehensive. Most websites warn visitors about the storage of private data in “cookies.” Many sites additionally grant visitors the ability to turn off such cookies except those technically necessary for the site’s operation.

Another visible adjustment is the widespread use of multi-factor authentication. Many sites, especially those involving credit, finance or shopping, validate login with a token sent by email, text or voice. These sites then verify the authorized user is logging in, which helps avoid leaking private data.
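The validation step described above can be sketched with Python’s standard library. This is a minimal sketch under assumed names (`issue_code`, `verify_code`); real services layer delivery, expiry, and rate limiting on top of a check like this:

```python
import hmac
import secrets


def issue_code() -> str:
    """Generate a 6-digit one-time code to send by email, text, or voice."""
    return f"{secrets.randbelow(10**6):06d}"


def verify_code(submitted: str, issued: str) -> bool:
    """Compare in constant time so timing differences don't leak the code."""
    return hmac.compare_digest(submitted, issued)


code = issue_code()
assert verify_code(code, code)
```

`secrets` draws from the OS’s cryptographic randomness source, which is why it is preferred over `random` for codes like this.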

Perhaps the biggest adjustment is not visible: encryption of private data. More businesses now operate on otherwise meaningless cipher substitutes (the output of an encryption function) in place of sensitive data such as customer account numbers, birth dates, email or street addresses, member names and so on. This protects customers: when the all-too-common breach does occur, the stolen substitutes are useless without the keys.
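The substitution idea can be illustrated with a small vault that swaps sensitive values for random tokens. This is a stdlib-only sketch under an assumed name (`TokenVault`); production systems use format-preserving encryption or an HSM-backed tokenization service rather than an in-memory dictionary:

```python
import secrets


class TokenVault:
    """Map sensitive values to random cipher substitutes and back."""

    def __init__(self):
        self._forward = {}  # sensitive value -> token
        self._reverse = {}  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        """Return the existing token for a value, or mint a new random one."""
        if value not in self._forward:
            token = secrets.token_hex(16)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        """Recover the original value; only the vault can do this."""
        return self._reverse[token]


vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
# The application stores and processes `t`; the real number stays in the vault.
assert vault.detokenize(t) == "4111-1111-1111-1111"
```

The design point is that a breach of the application database yields only tokens, which reveal nothing without access to the vault.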

Respecting privacy is now the norm. Companies that show this respect will be rewarded for doing so. Those that allegedly don’t, however, may experience a different fiscal outcome.

[Source: This article was published in securityboulevard.com By Jason Paul Kazarian - Uploaded by the Association Member: Jason Paul Kazarian]

Categorized in Internet Ethics

PimEyes markets its service as a tool to protect privacy and prevent the misuse of images

Ever wondered where you appear on the internet? Now, a facial recognition website claims you can upload a picture of anyone and the site will find that same person’s images all around the internet.

PimEyes, a Polish facial recognition website, is a free tool that allows anyone to upload a photo of a person’s face and find more images of that person from publicly accessible websites like Tumblr, YouTube, WordPress blogs, and news outlets.

In essence, it’s not so different from the service provided by Clearview AI, which is currently being used by police and law enforcement agencies around the world. PimEyes’ facial recognition engine doesn’t seem as powerful as Clearview AI’s app is supposed to be. And unlike Clearview AI, it does not scrape most social media sites.

PimEyes markets its service as a tool to protect privacy and prevent the misuse of images. But there’s no guarantee that people will upload only their own faces, making it equally powerful for anyone trying to stalk someone else. The company did not respond to a request for comment.

PimEyes monetizes facial recognition by charging for a premium tier, which allows users to see which websites are hosting images of their faces and gives them the ability to set alerts for when new images are uploaded. The PimEyes premium tiers also allow up to 25 saved alerts, meaning one person could be alerted to newly uploaded images of up to 25 people across the internet. PimEyes has also opened up its service for developers to search its database, with pricing for up to 100 million searches per month.

Facial recognition search sites are rare but not new. In 2016, Russian tech company NtechLab launched FindFace, which offered similar search functionality until the company shut it down in a pivot to state surveillance. Its founders described it as a way to find women a person wanted to date.

“You could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages,” cofounder Alexander Kabakov told The Guardian.

While Google’s reverse image search also has some capability to find similar faces, it doesn’t use specific facial recognition technology, the company told OneZero earlier this year.

“Search term suggestions rely on aggregate metadata associated with images on the web that are similar to the same composition, background, and non-biometric attributes of a particular image,” a company spokesperson wrote in February. If you upload a photo of yourself with a blank background, for example, Google may surface similarly composed portraits of other people who look nothing like you.

PimEyes also writes on its website that it has special contracts available for law enforcement that can search “darknet websites,” and its algorithms are also built into at least one other company’s application. PimEyes works with Paliscope, software aimed at law enforcement investigators, to provide facial recognition inside documents and videos. Paliscope says it has recently partnered with 4theOne Foundation, which seeks to find and recover trafficked children.

There are still many open questions about PimEyes, like exactly how it obtains data on people’s faces, its contracts with law enforcement, and the accuracy of its algorithms.

PimEyes markets itself as a solution for customers worried about where their photos appear online. The company suggests contacting websites where images are hosted and asking them to remove images. But because anyone can search for anyone, services like PimEyes may generate more privacy issues than they solve.

[Source: This article was published in onezero.medium.com By Dave Gershgorn - Uploaded by the Association Member: Grace Irwin]

Categorized in Search Engine

Privacy-preserving AI techniques could allow researchers to extract insights from sensitive data if cost and complexity barriers can be overcome. But even as the concept of privacy-preserving artificial intelligence matures, data volumes and complexity keep growing. This year, the size of the digital universe could hit 44 zettabytes, according to the World Economic Forum. That is 40 times more bytes than there are stars in the observable universe. And by 2025, IDC projects that number could nearly double.

More Data, More Privacy Problems

While the explosion in data volume, together with declining computation costs, has driven interest in artificial intelligence, a significant portion of data poses potential privacy and cybersecurity questions. Regulatory and cybersecurity issues concerning data abound. AI researchers are constrained by data quality and availability. Databases that would enable them, for instance, to shed light on common diseases or stamp out financial fraud — an estimated $5 trillion global problem — are difficult to obtain. Conversely, innocuous datasets like ImageNet have driven machine learning advances because they are freely available.

A traditional strategy to protect sensitive data is to anonymize it, stripping out confidential information. “Most of the privacy regulations have a clause that permits sufficiently anonymizing it instead of deleting data at request,” said Lisa Donchak, associate partner at McKinsey.

But the catch is, the explosion of data makes the task of re-identifying individuals in masked datasets progressively easier. The goal of protecting privacy is getting “harder and harder to solve because there are so many data snippets available,” said Zulfikar Ramzan, chief technology officer at RSA.
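One common way to quantify that re-identification risk is k-anonymity: the size of the smallest group of records that share the same quasi-identifiers (attributes like ZIP code or age band that are not names but can be linked to individuals). The sketch below, using hypothetical records, shows how a dataset that looks anonymized can still contain a unique, and therefore re-identifiable, person:

```python
from collections import Counter


def k_anonymity(records, quasi_identifiers):
    """Smallest group size over the quasi-identifier columns.

    A value of 1 means some record is unique on those columns and its
    subject is at risk of re-identification by linkage with other data.
    """
    groups = Counter(
        tuple(record[col] for col in quasi_identifiers) for record in records
    )
    return min(groups.values())


records = [
    {"zip": "90210", "age_band": "30-39", "diagnosis": "flu"},
    {"zip": "90210", "age_band": "30-39", "diagnosis": "cold"},
    {"zip": "10001", "age_band": "40-49", "diagnosis": "flu"},
]

# The 10001 / 40-49 combination appears once, so the dataset is only
# 1-anonymous: that individual is unique despite the missing name.
assert k_anonymity(records, ["zip", "age_band"]) == 1
```

The catch described above is exactly this linkage problem: every extra public data snippet shrinks these groups, so a dataset that was safely k-anonymous yesterday may not be tomorrow.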

The Internet of Things (IoT) complicates the picture. Connected sensors, found in everything from surveillance cameras to industrial plants to fitness trackers, collect troves of sensitive data. With the appropriate privacy protections in place, such data could be a gold mine for AI research. But security and privacy concerns stand in the way.

Addressing such hurdles requires two things. First, a framework providing user controls and rights on the front-end protects data coming into a database. “That includes specifying who has access to my data and for what purpose,” said Casimir Wierzynski, senior director of AI products at Intel. Second, it requires sufficient data protection, including encrypting data while it is at rest or in transit. The latter is arguably a thornier challenge.

[Source: This article was published in urgentcomm.com By Brian Buntz - Uploaded by the Association Member: Bridget Miller]

Categorized in Internet Privacy

New search engine Kilos is rapidly gaining traction on the dark web for its extensive index that allows users access to numerous dark web marketplaces.

A new search engine for the dark web, Kilos, has quickly become a favorite among cybercriminals, and here’s why.

It all began when the dark web search engine, Grams, launched in April 2014. Grams was an instant hit, proving useful not only to researchers but cybercriminals too.

The search engine used custom APIs to scrape some of the most prominent cybercriminal markets at the time. These included AlphaBay, Dream Market, and Hansa.

In addition to helping searchers find an illicit product using simple search terms, Grams also provided Helix, a Bitcoin mixer service, so users could conveniently hide their transactions on the platform.

Yes, Grams was a revolutionary tool for cybercriminals on the dark web. But its index was still relatively limited.

In a Wired interview, an administrator stated that the team behind Grams didn’t have the capabilities to crawl the whole darknet yet. So, they had to create an automated site submitter for publishers to submit their site and get listed on the search engine.

Despite Grams’ success, it did not last long. In 2017, the administrators shut down the search engine’s indexing ability and took the site down.

However, a new search engine would eventually rise to take Grams’ place two years later.

Kilos Became the Favorite Search Engine on the Dark Web

In November 2019, talks of a new dark web-based search engine called Kilos started making rounds on cybercriminal forums.

According to Digital Shadows, it’s uncertain whether Kilos has pivoted directly from Grams or if the same administrator is behind both projects. However, the initial similarities are uncanny.

For example, both share a similar search-engine-like aesthetic. The naming convention also remained the same, following units of weight or mass.

Expectedly, Kilos packs more weight than Grams ever did.

Thanks to the new search engine, searchers can now perform more specific searches across a more extensive index. Kilos enables users to search across six of the top dark web marketplaces for vendors and listings.

These include Cryptonia, Samsara, Versus, CannaHome, Cannazon, and Empire.

According to Digital Shadows, Kilos has already indexed 553,994 forum posts, 68,860 listings, 2,844 vendors, and 248,159 reviews from seven marketplaces and six forums. That’s an unprecedented amount of dark web content.

What’s more, the dark web search engine appears to be improving, with the administrator introducing new updates and features. Some of these features include:

  • Direct communication between administrator and users
  • A new type of CAPTCHA to prevent automation
  • Advanced filtering system
  • Faster searches and a new advertising system
  • New Bitcoin mixer called Krumble

Kilos is gradually becoming the first stop for dark web users. From individuals looking to purchase illicit products to those searching for specific vendors, tons of users now depend on the search engine.

This could further increase the amount of data that’s available to security researchers as well as threat actors.

[Source: This article was published in edgy.app By Sumbo Bello - Uploaded by the Association Member: Jennifer Levin]

Categorized in Search Engine

AOFIRS

World's leading professional association of Internet Research Specialists - We deliver Knowledge, Education, Training, and Certification in the field of Professional Online Research. The AOFIRS is considered a major contributor in improving Web Search Skills and recognizes Online Research work as a full-time occupation for those that use the Internet as their primary source of information.
