Logan Hochstetler

SAN FRANCISCO (CNNMoney) - Among the countless tabs and documents we open on our computers each day, our brains often forget where we saw something.

To make it easier to find things, Seattle-based Atlas Informatics launched Atlas Recall, which lets you search for anything you've ever looked at on your computer.

Atlas Informatics founder and CEO Jordan Ritter calls the software "a photographic memory for your digital life." In a demonstration to CNNMoney, that proved to be a fairly accurate assessment.

Once installed, Atlas Recall displays personalized search results from the app, desktop search, or Google search. This includes web pages, emails, Slack chats, Netflix films, Spotify songs, and anything else that has appeared on your screen.

Let's say you're planning a wedding. You can search for "wedding," and Atlas Recall will pull up calendar appointments, emails from your wedding party, websites of flower companies, photos of wedding dresses, the Spotify playlist you listened to when emailing your fiancé, and the Facebook wedding planning group you're a part of.

You can search by keyword, content type or time, and results are ranked by relevance. For instance, if two documents were open at the same time and you toggled between them, both will appear whether or not they contain the keyword.

Once installed on your hard drive and browser, Atlas Recall runs in the background and begins collecting your activity. The company captures all the content you've looked at and stores it on its servers.

So how does it work?

Computers have built-in features for people with disabilities to use hardware and software. These features -- like Apple's VoiceOver -- use accessibility APIs, the programming tools that pass information between the computer and the user. Atlas Recall taps into these tools and indexes all the data shared on-screen.
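To make this concrete, here is a minimal Python sketch of reading the focused UI element's text through the macOS accessibility API (the same class of API described above, not Atlas's actual code). It assumes the pyobjc ApplicationServices bindings are installed (pip install pyobjc-framework-ApplicationServices) and that the script has been granted accessibility permission:

from ApplicationServices import (
    AXIsProcessTrusted,
    AXUIElementCreateSystemWide,
    AXUIElementCopyAttributeValue,
    kAXFocusedUIElementAttribute,
    kAXValueAttribute,
)

def focused_text():
    # macOS only exposes accessibility data to processes the user has
    # trusted under Security & Privacy > Accessibility.
    if not AXIsProcessTrusted():
        raise RuntimeError("Grant this process accessibility access first.")
    system_wide = AXUIElementCreateSystemWide()
    # pyobjc returns (error_code, value) for C calls with an out-parameter;
    # an error code of 0 means success.
    err, element = AXUIElementCopyAttributeValue(
        system_wide, kAXFocusedUIElementAttribute, None)
    if err != 0 or element is None:
        return None
    err, value = AXUIElementCopyAttributeValue(element, kAXValueAttribute, None)
    return value if err == 0 else None

if __name__ == "__main__":
    print(focused_text())

An indexer like Atlas Recall would presumably watch such attributes across every application and ship the captured text to its cloud index.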

Though Atlas Recall is a unique product, it's similar to Google's ecosystem that saves and tracks everything you do. When you're logged in to Google services, it collects and saves your activity, from where you go via Google Maps, to appointments you make with Google Calendar. You can search these services to find personal data.

But Atlas Recall takes that behavior and applies it to literally everything you do with your computer.

"The platform wars are over, nobody won, and no one will ever win them again," Ritter told CNNMoney. "We now have diverse sets of apps and platforms and services, and we move fluidly between all of them. What we want is something that works the way we use our devices and data."

The tool understandably raises privacy concerns.

Ritter, cofounder of Napster and no stranger to controversial file-sharing, said all the data is encrypted while it's transferred to Atlas Cloud and stored in its servers. You can block Atlas from reading and indexing links, files, and apps -- say, block it from reading your Gmail -- remove stored data, or pause Atlas Recall so it's not running in the background indefinitely.

Atlas Recall is launching in beta on Wednesday for Apple devices running macOS Sierra or OS X El Capitan. It also has a compatible iPhone app that lets you search for items, but does not index content from your phone. Ritter said an app for Windows 10 is coming soon.

The search engine is the first product from Atlas Informatics, and it's free for people to try out while in beta. The company plans to charge for premium features in the future. The Emerald City startup has raised $20.7 million from investors including Microsoft, Nathan Myhrvold and Aspect Ventures.

Source : click2houston

When it comes to SEO, you can never be sure what is right and what is not, especially when Google and other search engines change their policies so often. It even happens that you are working hard on certain SEO tactics and then realize you’ve been penalized for them in the end. That’s why you should try methods that do not just bring results, but are fully approved by search engines. Fixing the mistakes is painful, so you’d better not do any. Here are a few worthy tips to do SEO right from the very beginning.

1) Check the guidelines regularly

If you are involved in SEO marketing, try not to miss any update Google or other search engines make to their optimization guidelines. They may change keyword density norms, the lengths of meta titles and meta descriptions, H1 tag optimization rules, etc. You need to be aware of all the changes to the guidelines if you take your SEO job seriously. Playing tricks on Google is a risky business and may cost you your website’s reputation. Play it safe with Google and avoid any black-hat SEO tactics, even if you have seen other marketers use them to bring massive traffic to their websites. Sooner or later, any suspicious activity gets penalized.

2) On-site SEO first

Before you start with link building and other external SEO practices, make sure your website is well optimized internally. Practically all the major elements need to be optimized for better search engine visibility and a streamlined user experience. Optimize titles and descriptions, tags, internal links, etc. Check for broken links and ensure friendly website navigation.

 

This is particularly important for large websites with complex structures (e.g. news websites). Check your website for broken code using one of the markup validation services. Compress your images so they load faster, and prefer .jpg over .png where image quality allows. Compress videos, presentations, and other multimedia as well; it all adds up to faster loading speed.
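As a concrete illustration of the broken-link check above, here is a minimal Python sketch that scans a single page for dead links. The starting URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE = "https://example.com/"  # placeholder: your own page

html = requests.get(PAGE, timeout=10).text
for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
    url = urljoin(PAGE, a["href"])
    if not url.startswith("http"):
        continue  # skip mailto:, javascript:, and #fragment links
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = "error"
    if status == "error" or status >= 400:
        print(f"BROKEN ({status}): {url}")

A real crawler would also follow internal links and respect robots.txt, but even this one-page version catches the most embarrassing 404s.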

3) Regular website updates

Search engines prefer websites where new content appears on a regular basis, so choose SEO tactics that are tightly tied to your content marketing plan. The best way to add fresh, valuable, informative content to your website is to run a blog; posting once or twice a week is enough for both search engines and your visitors. In addition, update your website descriptions regularly, enhance them, add new services, provide new images illustrating your products, and create videos showing your team at work or reviewing products. Publishing a video review for each product is a good way to keep a regular flow of high-quality content on your website. New testimonials and portfolio items will also build trust in the eyes of visitors and search engines.

Speaking of blogs, do not publish only written posts. Create infographics, publish videos, and combine them with text. Embed carefully selected keywords into your website and into each blog post you publish. Stick to the recommended content length – experts suggest that posts of 2,000-2,500 words work best for SEO. Write meta titles and meta descriptions for each piece of content you publish, and use keywords consistently so search engines rank your content higher.
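The meta title and description advice above is easy to audit with a script. This Python sketch fetches a page (the URL is a placeholder) and flags lengths outside common rule-of-thumb ranges, which are conventions rather than official search engine limits:

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"  # placeholder: the page to audit

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
title = (soup.title.string or "").strip() if soup.title else ""
desc_tag = soup.find("meta", attrs={"name": "description"})
desc = (desc_tag.get("content") or "").strip() if desc_tag else ""

print(f"title ({len(title)} chars): {title!r}")
print(f"description ({len(desc)} chars): {desc!r}")
if not 30 <= len(title) <= 60:
    print("warning: title outside the usual 30-60 character range")
if not 70 <= len(desc) <= 160:
    print("warning: description outside the usual 70-160 character range")

Run it across your key pages before worrying about anything off-site.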

4) Here comes off-site SEO

When your on-site SEO is taken care of, you can start with link building and other off-site SEO tactics. As mentioned earlier, SEO should always be centered around content, so one of the most effective ways to build links is through content marketing. Create an editorial calendar and adhere to it strictly. Think about all the suitable places for your external publications: news submission websites, article directories, and social media. Guest blogging works undeniably well if done right – you choose websites related to your industry, reach out to their owners, and place your content there.

Publications written by CEOs or recognized industry experts get more attention and generate more exposure. Case studies, white papers, and SlideShare presentations also tend to bring a lot of traffic to your website. Before you employ your content marketing and link-building tactics, think about which of the required skills you have in-house and which tasks you need to outsource. Create a detailed plan outlining your editorial calendar and the frequency of your publications. Promote everything on social media – it is one of the most popular ways to build links to your website.

5) Reduce competition

As writing is going to be the major part of your SEO efforts, choose your topics carefully. Try not to write about things that everyone else writes about: it will be difficult to compete, and you will have to target highly competitive keywords. Write about something specific instead of overly general subjects, ideally the aspects you know most deeply. In the same way, choose very specific keywords that have little competition – remember that when you try to reach everyone, you reach no one. After you choose your main keywords, support them with long-tail keywords. Don’t try to cover everything in a single post – general topics have already been covered thousands of times.

6) Write for Humans

Some inexperienced SEO marketers make their content sound unnatural by over-optimizing it. Irrelevant keywords, excessive keyword use, machine translation from other languages – visitors will stop reading such content as soon as they notice these things. You have probably read articles that were simply translated from another language, with no one bothering to correct even the most embarrassing mistakes. Personally, I leave such websites right away, and so do other visitors. This type of content is not going to do your search engine rankings any good, so you’d better concentrate on quality instead of quantity. If you write for real people, you will encourage them to comment on your publications, share them, and come back for more. This is exactly what search engines will reward you for.

Conclusion

The choice of SEO tactics cannot be an endless process. You need to act, and the sooner you act, the sooner you will understand whether they work for you. However, remember that black-hat SEO is not a tactic you should experiment with. Today’s SEO is more and more dependent on content – which is why it is sometimes hard to draw the line between SEO and content marketing. Create your SEO plan and content marketing plan and you will see where they overlap.

Source : promotionworld

Wednesday, 02 November 2016 06:02

5 Best Apps to Monitor Data Usage on Android

We’re in a computing era where the internet is mostly used from mobile devices. From waking up in the morning to falling asleep at night, we go online mostly from our phones. With such heavy usage comes the ever-rising cost of mobile data plans, so it’s no surprise if you want to keep track of the data usage on your Android smartphone. Staying within the bounds of a limited data plan is a tough task for sure, so here is our list of the 5 best apps to monitor data usage on Android:

1. Traffic Monitor


 

The Traffic Monitor app is an all-in-one app that provides various in-app utilities like Speed Test, App Data Usage Monitor, Signal Quality check and Network info. You can set a data usage limit after installing the app and it will start a Data Billing Cycle. Also, you can check data usage by location for home and work, and there’s an option to set up widgets that put a data usage report right on the home screen.

Install: (Free)

2. My Data Manager


My Data Manager is a more advanced app for managing network usage and keeping track of mobile data. What makes it advanced is that it lets you set a daily budget for your mobile data, and if your usage exceeds it, you’ll get an alert. Apart from that, it also keeps a history of the data used throughout the day. You can set multiple plans for broadband, mobile data and roaming usage.

 

It also packs in some great alerting features: it can send forecast usage warnings and billing cycle alerts, and it can tell you if you have a lot of data left in the last days of your plan. You can even set custom alerts. The persistent notification in the notification bar also comes in handy for a quick look at data usage.
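The budgeting logic behind alerts like these is straightforward to illustrate. Here is a toy Python version with made-up numbers, not the app's actual algorithm:

PLAN_MB = 2048   # hypothetical monthly plan
USED_MB = 1400   # used so far this billing cycle
DAYS_LEFT = 8    # days until the cycle resets
TODAY_MB = 95    # used so far today

# Spread what's left of the plan evenly over the remaining days.
daily_budget = (PLAN_MB - USED_MB) / DAYS_LEFT
print(f"daily budget: {daily_budget:.0f} MB")
if TODAY_MB > daily_budget:
    print("alert: today's usage exceeds your daily budget")

My Data Manager layers forecasting and custom thresholds on top of the same basic arithmetic.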

 

Install: (Free)

3. Data Usage Monitor


 

If you are looking for a simple data monitoring app, Data Usage Monitor is a great choice. The app doesn’t pack any other eye-catching features, but it excels with its usage-centric approach. You start by setting your data limits and positioning the baud rate meter on your screen. Thereafter, you can keep track of data usage from the app’s home screen, which shows per-hour data usage and per-app data consumption.


Install: (Free with in-app purchases)

4. Opera Max


 

Opera Max is a well-known data saving app for Android devices, but it can also help you keep track of data usage. Its timeline approach to usage statistics is a great way to see which apps in the background might have consumed data. You can go further into the App Management tab and turn off background data usage for a particular app. Overall, Opera Max is a pretty handy data monitoring app, and its additional data saving features are a real bonus.

Install: (Free)

5. Network Connections

Network Connections is one of those apps that will help you check which apps are spying on your data in the background. Normally such apps need root access, but Network Connections requires no root. To start network monitoring, tap on “Start Live Capture”. It’ll show the IP addresses of the servers each app is sending data to or receiving data from. Tap on any IP address and it’ll show you more information about any spying done by apps and where their destination servers are. Live Capture is limited in the trial version; you’ll need the unlock key for full access.
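The same idea can be sketched on a desktop with Python's psutil library (pip install psutil): list which local processes hold network connections and the remote addresses they are talking to. This illustrates the concept rather than the app's implementation, and on some systems it needs elevated privileges:

import psutil

for conn in psutil.net_connections(kind="inet"):
    # Skip sockets with no remote endpoint or no identifiable owner.
    if not conn.raddr or not conn.pid:
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue  # the process exited between the two calls
    print(f"{name:<25} -> {conn.raddr.ip}:{conn.raddr.port} ({conn.status})")

On Android itself, apps like Network Connections gather the equivalent data through the platform's own interfaces rather than psutil.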

Install: (Free, Pro key $3.99)

Source : beebom


What should SEOs do to make the best of the new Penguin update? Perhaps not much. Columnist Dave Davies notes that while Penguin 4.0 was indeed significant, things ultimately haven't changed that much.

For the last four-plus years now, we’ve heard a lot about Penguin. When it was initially announced in April 2012, we were told that this algorithm update, designed to combat web spam, would impact three percent of queries.

More recently, we’ve witnessed frustration on the part of penalized website owners at having to wait over a year for an update, after Google specifically noted one was coming “soon” in October of 2015.

In all the years of discussion around Penguin, however, I don’t believe any update has been more fraught with confusing statements and misinformation than Penguin 4.0, the most recent update. The biggest culprit here is Google itself, which has not been consistent in its messaging.

And this is the subject of this article: the peeling away of some of the recent misstated or just misunderstood aspects of this update, and more importantly, what it means for website owners and their SEOs.

So, let’s begin.

What is Penguin?

Note: We’re going to keep this section short and sweet — if you want something more in-depth, you should begin by reading Danny Sullivan’s article on the initial release of Penguin, “Google Launches ‘Penguin Update’ Targeting Webspam In Search Results.” You can also browse Search Engine Land’s Penguin Update section for all the articles written here on the topic.

The Penguin algorithm update was first announced on April 24, 2012, and the official explanation was that the algorithm targeted web spam in general. However, since the biggest losses were incurred by those engaged in manipulative link schemes, the algorithm itself was viewed as being designed to punish sites with bad link profiles.

I’ll leave it at that, with the assumption that I shouldn’t bore you with additional details on what the algorithm was designed to do. Let’s move now to the confusion.

Where’s the confusion?

Until Penguin 4.0 rolled out on September 23, 2016, there really wasn’t a lot of confusion around the algorithm. The entire SEO community — and even many outside it — knew that the Penguin update demoted sites with bad links, and it wasn’t until it was next updated that an affected site could expect some semblance of recovery.

The path was clear: a site would get hit with a penalty, the website owner would send out requests to have offending links removed, those that couldn’t be removed would be added to a disavow list and submitted, and then one would simply wait.
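The disavow list itself is just a plain text file in Google's documented format: one URL per line, a "domain:" prefix to disavow an entire domain, and "#" for comments. A minimal Python generator, with made-up domains and URLs, looks like this:

# Hypothetical offending links gathered during a link audit.
bad_domains = ["spammy-directory.example", "paid-links.example"]
bad_urls = ["http://blog.example/comment-spam-page.html"]

with open("disavow.txt", "w") as f:
    f.write("# Links we requested removal of but could not get taken down\n")
    for domain in bad_domains:
        f.write(f"domain:{domain}\n")
    for url in bad_urls:
        f.write(f"{url}\n")

The resulting disavow.txt is then uploaded through Google Search Console's disavow tool.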

 

However, things got more complicated with this most recent update — not because the algorithm itself got any more difficult to understand, but rather because the folks at Google did.

In essence, there were only a couple of major changes with this update:

  1. Penguin now runs in real time. Webmasters impacted by Penguin will no longer have to wait for the next update to see the results of their improvement efforts — now, changes will be evident much more quickly, generally not long after a page is recrawled and reindexed.
  2. Penguin 4.0 is “more granular,” meaning that it can now impact individual pages or sections of a site in addition to entire domains; previously, it would act as a site-wide penalty, impacting rankings for an entire site.

At first glance, it would seem there isn’t a lot of room for confusion here. However, when the folks at Google started adding details and giving advice, that ended up causing a bit of confusion. So let’s look at those details to get a better understanding of what we’re expected to do.

Disavow files

Rumor had it, based on statements by Google’s Gary Illyes, that a disavow file is no longer necessary to deal with Penguin-related ranking issues.

This is due to a change in how Penguin 4.0 deals with bad links: Google now devalues the links themselves rather than demoting the site they point to.

Now, that seems pretty clear. If you read Illyes’ statements in the article linked above, there are a few takeaways:

  1. Spam is devalued, rather than sites being demoted.
  2. There’s less need to use a disavow file for Penguin-related ranking penalties.
  3. Using the disavow file for Penguin-related issues can help Google help you, but it is more specifically useful for sites under manual review. 

So now we have a “yes, you should use it for Penguin” and a “no, you don’t need it for Penguin.” But wait, it gets more fun. On October 4, 2016, Google Webmaster Trends Analyst John Mueller stated the following in an Office Hours Hangout:

[I]f these are problematic links that are affected [by Penguin], and you use a disavow file, then that’s a good way for us to pick that up and recognize, “Okay, this link is something you don’t want to have associated with this site.” So when we recrawl that page that is linking to you, we can drop that link from our link graph.

With regards to devaluing these low quality links instead of punishing you, in general we try to figure out what kind of spammy tactics are happening here and we say, “Well, we’ll just try to ignore this part with regards to your website.”

So… clear as mud?

The disavow takeaway

The takeaway here is that the more things change, the more they stay the same. There is no change. If you’ve used unethical link-building strategies in the past and are considering submitting a disavow file — good, you should do that. If you haven’t used such strategies, then you shouldn’t need to; if Google finds bad links to your site, they’ll simply devalue them.

Of course, it was once also claimed that negative SEO doesn’t work, meaning a disavow wasn’t necessary for bad links you didn’t build. This was obviously not the case, and negative SEO did work (and may well still), so you should be continuing to monitor your links for bad ones and adding them to your disavow file periodically. After all, if bad links couldn’t negatively impact your site, there would be no need for a disavow at all.

And so, the more things change, the more they stay the same. Keep doing what you’ve been doing.

The source site?

In a recent podcast over on Marketing Land, Gary Illyes explains that under Penguin, it’s not the target site of the link that matters, it’s the source. This doesn’t just include links themselves, but other signals a page sends to indicate that it’s likely spam.

So what we’ve just been told is that the value of a link comes from the site and page it sits on, not from where it’s pointing. In other words, when you’re judging your inbound links, be sure to look at the source page and domain of those links.

The more things change, the more they stay the same.

Your links are labeled

In the same podcast on Penguin, it came to light that Google places links on a page into categories, including things like:

  • footer;
  • Penguin-impacted; and
  • disavowed.

It was suggested that there are other categories, but they weren’t named. So, what does this really mean?

It means what we all pretty well knew was going on for about a decade. We now have a term to use to describe it (“labels”) rather than simply understanding that a page is divided into sections, and the sections that are the most visible and more likely to be engaged with hold the highest value (with regard to both content and links).

Additionally, we already knew that links that were disavowed were flagged as such.

 

There is one new side

The only really new piece of information here is that either Google has replaced a previous link weighting system (which was based on something like visibility) with a labeling system, or they have added to it. Essentially, it appears that where previously content as a whole may have been categorized and links included in that categorization, now a link is given one or possibly multiple labels.

So, this is a new system and a new piece of information, which brings us to…

The link labeling takeaway

Knowing whether the link is being labeled or simply judged by its position on the page — and whether it’s been disavowed or not — isn’t particularly actionable. It’s academically interesting, to be sure, and I’m certain it took Google engineers many days or months to get it figured out (maybe that’s what they’ve been working on since last October). But from an SEO’s perspective, we have to ask ourselves, “What really changed?”

Nothing. You will still be working to develop highly visible links, placed contextually where possible and on related sites. If this strays far from what you were doing, you likely weren’t doing your link building correctly to begin with. I repeat: the more things change, the more they stay the same.

But not Penguin penalties, right? Or… ?

It turns out that Penguin penalties are treated very differently in 4.0 from the way they were previously. In a discussion, Google’s Gary Illyes revealed that there is no sandbox for a site penalized by Penguin.

So essentially, if you get hit with a Penguin penalty, there is no trust delay in recovery — once you fix the problem and your site is recrawled, you’d bounce back.

That said, there’s something ominous about Illyes’ final tweet above. So Penguin does not require or impose a sandbox or trust-based delay… but that’s not to say there aren’t other functions in Google’s algorithm that do.

So, what are we to conclude? Avoid penalties — and while not Penguin-related, there may or may not be delays in recovering from one. Sound familiar? That’s because (surely you can say it with me by now)…

The more things change, the more they stay the same

While this was a major update with a couple of significant changes, what it ultimately means is that our SEO process hasn’t really changed at all. Our links will get picked up faster (both the good and the bad), and penalties will likely be doled out and rolled back much more reliably; however, the links we need to build and how they’re being weighted remain pretty much the same (if not identical). The use of the disavow file is unchanged, and you should still (in my opinion) watch for negative SEO.

The biggest variable here comes in Google’s statement that Penguin is not impacted by machine learning.

I have no doubt that this is currently true. However, now that Penguin is part of the core algorithm — and as machine learning takes on a greater role in how search engines rank pages — it’s likely that machine learning will eventually begin to control some aspects of what Penguin has traditionally handled.

But when that day comes, the machines will be looking for relevancy and maximized user experience and link quality signals. So the more you continue to stay focused on what you should be doing… the more it’ll stay the same.

Source : searchengineland

ICIT Fellow Robert Lord discusses the exploitation of protected health information on the deep web and gives cybersecurity tips on how to best protect these valuable records.

The deep web is used for both legitimate and illegal purposes; individuals can utilize its anonymity to protect their personal information, communicate clandestinely with sources or whistleblowers, or engage in illegal practices such as selling and purchasing the personal information of others.

In this Q&A, Robert Lord, ICIT Fellow and CEO of healthcare and cybersecurity company Protenus, discusses the deep web and its role in the exploitation of patients' protected health information.

Can you explain what exactly the deep web is and how illegal practices are able to be conducted on it? Why should people know about it, and should they be worried?

If you think about the internet in general, think about it like an iceberg. We see the tip of it known as the "clear net" or the "indexed web." That is the information that search engines index and what we can Google -- our everyday internet. Four hundred to five hundred times more data is housed underneath the iceberg. That is the deep web that's not accessible by normal means; we need to use tools like the Tor browser.

You can use a different set of protocols to gain information, such as documents, or to preserve privacy when journalists communicate confidentially and privately with sources. It is also used by citizens whose countries block access to the internet, and it is used, unfortunately, for a lot of criminal activity.
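For the technically curious: routing a request through Tor from a script is simple once a local Tor client is running. This Python sketch assumes Tor is listening on its default SOCKS port 9050 and that requests is installed with SOCKS support (pip install requests[socks]):

import requests

# socks5h (note the "h") resolves DNS inside Tor as well, so hostname
# lookups don't leak to the local resolver.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request really arrived via Tor.
r = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
print("via Tor:", "Congratulations" in r.text)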


Thinking of the deep web more as a tool for anonymity is instructive. The deep web can protect people who have legitimate fears about a government crackdown or individuals who have private information, such as journalists, who want to protect sources and whistleblowers.

What specific risks does the deep web pose to electronic, protected health information and how do these risks influence health providers' HIPAA compliance processes? 

As a nation, we are in a serious crisis right now. What we did was spend tens of millions of dollars rolling out electronic health records. We put very little thought into how we were going to protect that data.

We digitized 300 million Americans' lives, but all that information is protected in a rather weak way. Unfortunately, hackers and insiders have decided that healthcare is a very soft target, and that protected health information is extraordinarily valuable.

Business associates, contractors and employees all have the ability to access that information, and it can then be sold on markets like the deep web. On the deep web are markets that use anonymous profiles showing whether people are certified buyers or sellers, providing an easy way to sell data. The deep web facilitates this transfer of monetized data. In 2015, 113 million medical records were breached -- a third of our nation's medical records.

Among companies whose patients' data was leaked, were compliance regulations being met or did the companies fall short and allow this exploit of patients' protected health information to be possible?

 

One of the main challenges that hospitals face is extraordinarily constraining budgets. There are all sorts of mandates from the government and industry organizations that are pushing them, and cybersecurity is not prioritized in budget allocation. There's a huge challenge with hospitals investing strategically in cybersecurity resources.

Hospitals have gotten a lot of vendor fatigue. What's happened is that oftentimes they're not looking towards the next generation technology, the technology that they need to protect electronic health records, because there's so much information being presented to them that they don't know what to look at and what not to look at.

What advice can you offer to healthcare providers and other businesses to prevent these types of exploits from happening in the future?

An extraordinarily small fraction of healthcare companies' budgets -- 5% -- is spent on cybersecurity. Other companies with less sensitive information, such as financial institutions, spend about 12-15% of their budget on cybersecurity. Healthcare is the most popular target for hackers. Companies have to educate their workforce on security and privacy and why healthcare data is so valuable.

Are there any particular cybersecurity strategies that are proving effective against deep web exploits? How can companies best protect business and customer information from them?

One of the really important things is to have a C-suite and board of director level buy-in for cybersecurity. The most successful organizations we work with have a direct line of communication that allows the security and privacy groups to communicate with the board and C-suite to set goals and communicate cybersecurity strategies.

Source : searchcompliance

Saturday, 22 October 2016 06:01

iPhone 7 Secret Will Anger Everyone

Every iPhone 7 is the same, right? Wrong.

A new video from tech’s most popular YouTuber, Lewis Hilsenteger aka Unbox Therapy, shakes customers’ belief that the only difference between iPhone 7 models is their storage capacity – and it shows 32GB iPhone 7 owners are getting a very raw deal indeed.

In the video Hilsenteger reveals the entry level 32GB iPhone 7 delivers dramatically worse performance than both the 128GB and 256GB models – and his test results are so significant they may change your purchase plans or even motivate you to exchange your 32GB model.

Here’s the recap: both app benchmarking and straight data transfers show the 32GB model of the iPhone 7 has been equipped with storage which is far slower than the more expensive models.

So how does this translate into real life? A good example: Hilsenteger demonstrates that copying a high-definition movie to a 32GB iPhone 7 takes 40% longer than to a 256GB iPhone 7. Meanwhile, benchmarking the two models shows the 32GB option (which managed 42.4 megabytes per second) is roughly 8x slower than the 256GB model (341MB per second).

I can add a further benchmark to this: having tested the 128GB iPhone 7, I found it delivers write speeds of 298MB/s – slightly slower than the 256GB option, but clearly emphasising the seismic gap to the cheapest 32GB model.
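For readers who want to reproduce this kind of figure, a naive sequential-write benchmark is easy to sketch in Python. Caching and flash-controller behavior make the output a rough estimate rather than a lab-grade measurement:

import os
import time

SIZE_MB = 256
buf = os.urandom(1024 * 1024)  # 1 MB of incompressible data

start = time.perf_counter()
with open("bench.tmp", "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # force the data out of OS caches onto storage
elapsed = time.perf_counter() - start

os.remove("bench.tmp")
print(f"{SIZE_MB / elapsed:.1f} MB/s sequential write")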

Interestingly, Hilsenteger claims that the 32GB model of the $100-more-expensive iPhone 7 Plus performs just as badly as the 32GB iPhone 7, but he doesn’t demonstrate this on video. I contacted Apple for a response to these revelations, but after a two-day wait the company declined to issue a formal statement. I will update this post if that changes.

What To Think?

So how bad is this? In short, it’s not great.

On the one side there is a defence: solid state storage (which all smartphones use) operates with chips that run in parallel. Consequently, the more chips (that is, more storage) you add, the more chips can operate together and the faster the storage can go. So it is only natural that the 128GB and 256GB versions of the iPhone 7 perform faster than the 32GB model.

And yet this isn’t a full defence; in fact, it’s rather disingenuous. This argument ignores the fact that 42.4MB/s is not remotely near the limits of what 32GB of solid state storage can achieve. Need an example? The 32GB Galaxy S7 Edge is benchmarked at 150MB/s, while I benchmarked Google’s new 32GB Pixel XL at a mouthwatering 301MB/s:

[Image: Google Pixel XL 32GB write speeds illustrate 32GB should not be a limiting factor on performance. Image credit: Gordon Kelly]

All of which causes cracks to appear. 42.4MB/s speeds will not make your iPhone 7 work at a snail’s pace, but this is a clear road bump waiting to happen when you consider how future-proof you want your shiny new iPhone 7 to be.

On top of this questions have to be asked about how Apple is marketing its 32GB, 128GB and 256GB iPhone 7 and iPhone 7 Plus. Its website does have a footnote beside the ‘Capacity’ listing for its models, but all the footnote says is:

“Available space is less and varies due to many factors. A standard configuration uses approximately 4GB to 6GB of space (including iOS and built-in apps) depending on the model and settings.”

Nowhere is there a hint that there is a major performance discrepancy between the models, and when Apple asks customers to part with $649 (32GB iPhone 7) or $749 (32GB iPhone 7 Plus), or to sign a multi-year binding carrier contract, they have the right to be very angry indeed.

Source : forbes

Saturday, 01 October 2016 09:16

How Google Has Changed the World

It’s incredible that it took just 18 years for Google -- the company reached this milestone of adulthood on Sept. 27 -- to create a market capitalization of more than $530 billion. It’s perhaps even more amazing to recall how the search engine has changed life as we know it.

Google, now a unit of holding parent company Alphabet Inc., began in Larry Page and Sergey Brin’s Stanford University dorm in 1998, before campus officials asked them to find a real office after the Stanford IT department complained that Page and Brin’s project was sucking up all the university’s bandwidth.

By the time I joined the company in November of 2001, it was apparent that we were changing the world. As an early employee at Google -- the second attorney hired there -- I sometimes felt shivers run up my spine thinking about what we were building. Democratizing access to information, and bringing the real world online -- it was an inspiring place to be.

Having grown up in a working class neighborhood, I had to travel to an affluent neighborhood to access a good public library, spending countless Saturday afternoons with volumes of reference books to learn how to apply for financial aid to attend college. In those pre-Internet days, a good library and a kind-hearted librarian were my keys to advancement.

After the printing press, the first major democratization of access to information was driven a century ago by steel baron Andrew Carnegie. He became the world’s richest man in the late 19th century and then gave it all away, donating $60 million to fund 1,689 public libraries across the United States. To my mind, Google took Carnegie’s vision of putting information in the hands of the general public and put it on steroids, creating a virtual library akin to those found only in sci-fi movies in 1998.

Google indexed the internet extraordinarily well without human intervention, unlike previously curated outlets such as Yahoo! or LexisNexis, and in such a way that the user did not have to know how to use the index or Boolean search methods. Google enabled free searches of words or terms, making all manner of information instantly retrievable even if you did not know where it was housed. With Google, you could find any needle in any haystack at any time. Unlocking that data has indeed been a great equalizer: any individual can arm him or herself with relevant information before seeing a doctor or applying for government assistance, housing or a job.

 

Getting archives online

Soon, Google could trivially retrieve any piece of data on the World Wide Web. Crucially, Google started indexing information that was previously offline, such as far-flung archives (imagine a very old text in a tower in Salamanca), to make that knowledge searchable. People’s photos and videos followed. Then, of course, Google cars began cruising and mapping our streets. That, paired with GPS, granted us all a new superpower -- being able to find our way in almost any small town or big city in the world.

Now Google is a global archive storing our history as it is made. It is as though a virtual world is being created right alongside our real world, a simulation of reality that grows more robust by the day. Because of Google, the creation and storage of information itself has expanded exponentially as people and scholars have access to information that enables them to make new discoveries. Those discoveries, in turn, are shared with the world thanks to the culture of sharing that has been central to the internet and Google’s philosophy. All this has sped the pace of discovery.

Of course, there have been casualties. Google has changed the business of newspapers forever and virtually single-handedly run most publishers of maps out of business. It transformed advertising, using and perfecting A/B testing to understand our tastes and what makes a person click on an ad. Sometimes I worry that technology companies have become almost too good at this, building upon and applying these lessons to other ways of collectively sucking us into our devices more and more.

This access to information without the curation of trained journalists carries other costs too, leading to an internet rife with misinformation and untruth. Nowhere is that more evident today than in our rancorous U.S. presidential election, where it seems little value is placed on objectivity, making organizations such as factcheck.org essential reading. The growth of Google and the diminution of the role of the established media in our society at such crucial moments might cause Alexis de Tocqueville, who believed newspapers “maintain civilization,” to turn in his grave.

One thing’s for sure: With Google, the future will bring the unexpected and sometimes delightful. Autonomous cars, robots, gesture-sensing fabrics, hands-free controls, modular cell phones and reimagined cities are among the projects that lie ahead for the search giant, which, even as one of the world’s largest companies, has maintained a startup culture at its offices, which now employ more than 61,000 people.

In breaking out beyond the constraints of the online world into the physical universe, Google has made us believe (and even expect) that when one is inspired by some great purpose, we can transcend limitations. Anything becomes possible.

Source : http://www.foxnews.com/

Republican presidential nominee Donald Trump has described the US economy as “false,” saying that the central banking system is intentionally keeping interest rates low to prevent a new economic collapse.

“We have a very false economy,” Reuters reported Trump as saying in answer to a journalist’s question while campaigning in Ohio on Monday.

“They’re keeping the rates down so that everything else doesn't go down,” Trump added in response to the question, which was about a possible rise in interest rates by the Federal Reserve this month.

“At some point the rates are going to have to change,” Trump said. “The only thing that is strong is the artificial stock market.”

The ideas on rebuilding the US economy that the billionaire offered in an April interview with Fortune magazine have been dubbed questionable by some, while others argue that his approach may work out just fine.

“We have to rebuild the infrastructure of our country. We have to rebuild our military, which is being decimated by bad decisions. We have to do a lot of things. We have to reduce our debt, and the best thing we have going now is that interest rates are so low that lots of good things can be done – that aren’t being done, amazingly,” Trump said back in April.

Meanwhile, Democratic presidential frontrunner Hillary Clinton has not been so radical in her future plans concerning the US economy. However, she promised to support a shakeup in the top ranks of the Federal Reserve in an effort to increase diversity and minority representation within the Fed, Clinton’s campaign said in a statement back in May.

“The Federal Reserve is a vital institution for our economy and the wellbeing of our middle class, and the American people should have no doubt that the Fed is serving the public interest,” the statement said.

“That's why Secretary Clinton believes that the Fed needs to be more representative of America as a whole and that common sense reforms – like getting bankers off the boards of regional Federal Reserve banks – are long overdue.”

The Fed is currently headed by a board of governors based in Washington along with a dozen regional bank presidents spread across the US. The board is nominated by the White House and then approved by the Senate. Regional bank presidents, on the other hand, are chosen by their boards of directors, which are chosen by the banking industry and by Washington Fed governors.

Source : https://www.rt.com/usa/358365-trump-fed-false-economy/

Friday, 02 September 2016 04:26

10 PROS AND CONS OF SOCIAL MEDIA

Because the internet has changed the way we communicate and interact with one another on so many levels, it’s become necessary to explore the pros and cons of social media and its effects on our society.

The Pros

1-Increased criminal prosecution because of social media

The NYC police department began using Twitter back in 2011 to track criminals foolish enough to brag about their crimes online. When the Vancouver Canucks lost the Stanley Cup in 2011, fans in Vancouver took to the streets and rioted; local authorities used social media to track and tag the people involved, and they caught people who were stealing during the riot.

2-Social networking creates new social connections

Statistics show that 70% of adults have used social media sites to connect with relatives in other states, and 57% of teens have reported making new friendships on social media sites.

3-Students are doing better in school

This is an interesting statistic about the pros and cons of social media and its effect on students doing well in school: 50% of students with internet access have reported using social networking sites to discuss schoolwork, and another 59% talk about instructive topics.

4-Better quality of life

If you want to talk about the pros and cons of social media, take a close look at all the support groups on Facebook. Members of these groups discuss their health conditions, share important information, and resources relevant to their conditions while creating strong support networks.

5-Social media as a source of employment

Job sourcing has gone modern thanks to social media. Sites such as LinkedIn are a major resource that 89% of job recruiters take advantage of when looking to hire potential employees.

Now let’s take a look at the Cons of social media

1-Social media and the news

Much of the news that people read comes from social media websites – an estimated 27.8%. This figure ranks just under print newspapers at 28.8%, is greater than radio’s 18.8%, and far outpaces other print publications at just 6%.

2-Too much misinformation

With the advent of the web, people started to create their own websites and blogs. While many of those blogs were just basic diaries, a few covered topics like health and politics, while others were how-to blogs.

Many blogs have turned into rumor mills, spreading misinformation that people tend to believe just because it’s on the web.

Rumors about Hurricane Sandy and gunfights in other countries like Mexico have been picked up by reliable news services, and this misinformation has been shared without proper vetting of the sources providing the information.

3-Pupils spending too much time on social media sites have lower academic grades

Here is another argument about the pros and cons of social media as it pertains to students. Statistics show that pupils who use social media too often tend to have GPAs of 3.06, compared to GPAs of 3.82 for pupils who don’t use social media.

An even scarier fact is that students who use social media tend to score 20% lower on their tests than their counterparts.

 

4-Social media sites to blame for lost productivity

Social media platforms like Facebook and Twitter are a direct cause of lost productivity in the workplace. In one survey, 36% of people said that social networking was the biggest waste of time, ahead of activities like fantasy football, shopping, and watching television.

5-Social media is the cause for less face to face communication

One last discussion about the pros and cons of social media concerns the lack of one-on-one communication. A 2012 study found that the share of families reporting they spend less time with one another rose from 8% in 2000 to 32% in 2011.

The study also reported that 32% of the people in the survey were either texting or on social media sites instead of communicating with each other during family gatherings.

Source : http://www.toptensocialmedia.com/social-media-social-buzz/10-pros-and-cons-of-social-media/

THERE’S ONE ADJECTIVE Facebook uses over and over to describe the kind of content it hopes to show you. Whether it’s about the stories that come up in your News Feed, or ads, or apps, there it is, front and center in a press release or nestled in an interview quote or headlined on a blog post. The word? “Relevant.”

Remember the irony of that the next time you wake up in the morning, open Facebook, and look at the handful of notifications that have, oh, somewhere between zero and negative one thousand things to do with you.

You’ve probably noticed recently that Facebook seems to have a real problem adhering to an appropriate notification volume, or defaulting to an acceptable definition of what exactly is important enough to warrant notifying you about it. Here’s what happens lately: You see you’ve got a notification and you get excited! Like Pavlov’s dog, Facebook has trained you to expect something interesting, something relevant to you specifically, when you click that little icon. But lately? It’s not an alert that someone has commented on your (admittedly stunning) new profile photo. No, it’s an alert about a post from a literal stranger with whom you’ve shared nothing but the decision to click “Join Group” in 2011. Other times, it’s a reminder that a four-year acquaintance is interested in attending an event “near you” sometime this week. Your excitement quickly becomes disappointment. There is no treat for Pavlov’s dog anymore, only reminders of distant acquaintances’ birthdays.

Over the last year and a half, Facebook has paid increasing attention to the notifications feature of its platform. It started in 2015 with the (now abandoned) decision to turn the notifications tab into a sort of all-in-one information hub. Around the same time, Facebook released its short-lived Notify app, which was shut down in June of this year, just seven months after it launched. But while the app itself was dissolved, Facebook made a point to mention that it had “learned a lot about how to make notifications as timely and relevant as possible” with Notify, and that it would be incorporating the functionality into other Facebook products. (Facebook did not respond to a request for comment by publication time.)

The result of all this seems to be that instead of getting a few notifications about your friends and family, you are now by default opted in to receiving many notifications from random people who are, at best, tangentially related to you.

If you think this is starting to sound a lot like spam, well, you’re not wrong. The good news is, you can turn these notifications off. The annoying thing is that they aren’t off by default for things like Facebook Groups already. But if you have five or ten minutes, here’s how to fix your notifications in Facebook’s settings.

How To Change Notifications for Facebook Groups


Click on the little downward-facing arrow on the top toolbar of your Facebook page, where you’ll see Settings. Click it, and then click Notifications in the left column.

On this page, Facebook lets you change the notification settings by device. So choose accordingly, depending on whether you want your notifications changed on mobile, desktop, or both.

[Screenshot: the desktop notification settings page]


Under the “What You Get Notified About” section, you’ll see Group Activity. Click edit.

[Screenshot: the “What You Get Notified About” section]


A pop-up dialogue will appear with all the groups you’re a part of. Change the notification settings to either All Posts, Friends’ Posts, Highlights, or Off.

While most of the options are self-explanatory, Highlights is not, and that’s likely the source of your problems, since it is selected by default in many cases. According to Facebook, choosing Highlights will notify you of “suggested posts” and posts by your friends—“suggested” of course being the semantic word-ball that got us into this mess.

Once you’ve chosen your notification preferences, click the X to exit. Your changes should automatically be saved.

And that’s it! Hopefully this will solve many of your notification woes. Of course, as you saw, Groups notifications aren’t the only thing you can change. If you’re annoyed by things like birthday and event reminders, or live video notifications, you can change those there, too.

Happy Interneting!

Source : http://www.wired.com/2016/08/change-spammy-group-notifications-facebook/ 
