

Carol R. Venuti


Wednesday, 06 September 2017 01:06

Google reveals its most-searched ‘How To’ tips

It's easy to forget how difficult DIY repairs were just a couple of decades ago, considering how easy the internet makes it to fix very specific product problems. (My biggest personal victory was fixing a 50-inch LG plasma display that borked a week after the warranty expired, following some extensive Googling.) Now, Google has created a site that shows exactly what you want to fix, do and learn the most, based on where you live.

The need to fix windows, walls and doors topped lists everywhere, so the team threw out those results to focus on regional patterns. The results? "North Americans and East Asians need their toilets, people in former Soviet countries are fearless enough to attempt fixing their own washing machines, [and] warmer climates can't live without a fridge," interactive visual journalist Xaquin G.V. writes.

Other top searches revolve around cooking, dating, money, dressing and health. For instance, many folks want to know how to boil an egg (maybe we're becoming too reliant on Google), impress a girl, write a check, tie a bow-tie, pick a lock, lose weight, gain weight and get rid of pimples.

Other items, like cooking asparagus, asking someone to the prom and losing weight, tend to be seasonal, Google says. Others are viral, peaking and declining over a short period, with subjects like how to make slime and loom bands.

The data was culled from users' searches on how to do and fix things; "how to" is one of the most common terms used on Google. The site itself was created by Google's News Lab, working in conjunction with Xaquin. It has a rich, responsive design and works well on mobile, a priority for News Lab experiments, Data Editor Simon Rogers told TechCrunch. The lab, Rogers said, is particularly interested in experimenting with data journalism as a way to tell or summarize interesting stories in new ways.

Source: This article was published on finance.yahoo.com by Steve Dent

Mark Weiser predicted the Internet of Things in a seminal article in 1991 about how people would interact with networked computation distributed into the environments and artifacts around them.

Before the IoT moniker dominated, his vision of “ubiquitous computing” could take many names and flavors as factions tried to establish their own brand (“Things That Think” at the Media Lab, “Project Oxygen” at MIT’s Lab for Computer Science, “Pervasive Computing,” “Ambient Computing,” “Invisible Computing,” “Disappearing Computer,” etc.), but it was all still rooted in Weiser’s “UbiComp.”

The Internet of Things assumes ubiquitous sensate environments. Without these, the cognitive engines of this everywhere-enabled world are deaf, dumb, and blind, and cannot respond relevantly to the real-world events that they aim to augment. And the last decade has seen a huge expansion in wireless sensing, which is having a deep impact on ubiquitous computing. Advances have been rampant, and sensors of all sorts now seem to be making their way into everything. A myriad of commercial products are appearing for collecting athletic data in sports that range from baseball to tennis, and even the amateur athlete of the future will be instrumented with wearables that aid in automatic/self/augmented coaching. Sensors of various sorts have also crept into fabric and clothing, and, going beyond wearable systems, electronics are now attached directly to, or even painted onto, the skin.


In George Orwell’s 1984, it was the totalitarian Big Brother government who put the surveillance cameras on every television—but in today's reality, it is consumer electronics companies who build cameras into the common set-top box and every mobile handheld. Cameras are becoming a commodity, and they will become even more common as generically embedded sensors.

In the coming years, as large video surfaces cost less and are better integrated with responsive networks, we will see the common deployment of pervasive interactive displays. Information coming to us will manifest in the most appropriate fashion (e.g., in your smart eyeglasses or on a nearby display)—the days of pulling your phone out of your pocket and running an app are numbered.

Furthermore, the energy needed to sense and process has steadily declined—sensors and embedded sensor systems have taken full advantage of low-power electronics and smart power management. Similarly, energy harvesting, once an edgy curiosity, has become a mainstream drumbeat that is resonating throughout the embedded sensor community. And the dream of integrating harvester, power conditioning, sensor, processing, and perhaps wireless on a single chip nears reality.

Moore’s Law has democratized sensor technology enormously. Ever more sensors are now integrated into common products (witness mobile phones, which have become the Swiss Army Knives of the sensor/RF world), and the DIY movement has also enabled custom sensor modules to be easily purchased or fabricated through many online and crowd-sourced outlets. As a result, this decade has witnessed an explosion of real-time sensor data flowing into the network. This will surely continue in the following years, leaving us the grand challenge of synthesizing this information into many forms—for example, grand cloud-based context engines, virtual sensors, and augmentation of human perception. These advances not only promise to usher in true UbiComp, they also hint at radical redefinition of how we experience reality that will make today’s common attention-splitting between mobile phones and the real world look quaint and archaic.

We are entering a world where ubiquitous sensor information from our proximity will propagate up into various levels of what is now termed the “cloud” then project back down into our physical and logical vicinity as context to guide processes and applications manifesting around us.

Our relationship with computation will become much more intimate as we enter the age of wearables. Right now, all information is available on the many devices around us at the touch of a finger or the enunciation of a phrase. Soon it will stream directly into our eyes and ears. This information will be driven by context and attention, not direct query, and much of it will be pre-cognitive, happening before we formulate direct questions. Indeed, the boundaries of the individual will be very blurry in this future. Humanity has pushed these edges since the dawn of society. Beginning with the sharing of information through oral history, the boundary of our mind expanded with writing and later the printing press, which eliminated the need to retain information verbatim and allowed us to keep pointers into larger archives instead. In a future where we live and learn in a world deeply networked by wearables and eventually implantables, how our essence and individuality is brokered between organic neurons and whatever the information ecosystem becomes is a fascinating frontier that promises to redefine humanity.


Source: This article was published on technologyreview.com by Joseph A. Paradiso

GOOGLE CHROME users will soon be getting a new update to download that could change the way they browse the internet forever.

Google Chrome fans will be able to download a new ad blocker update that lets them mute entire websites.

Within a few clicks, Google Chrome users will be able to mute adverts that automatically play video or audio, thanks to a brand new incoming feature.

Google’s Francois Beaufort took to Google+ to reveal the brand new feature that the Google Chrome team is working on.

In a screenshot he shared, you can see that you’ll be able to click the ‘Info’ or ‘Secure’ label to the left of the URL you’re visiting to access the feature.

This will open a pop-up menu and in it will be a new Sound option that lets you mute any and all sounds from the particular website.

The feature, which was reported by 9to5Google, will be useful when visiting websites that automatically play videos.

Google also announced this summer that it would be launching Google Chrome's ad-blocking features in early 2018.

The upcoming Google Chrome feature won’t block every advert, but will block ones that are deemed unacceptable.

The group that decides this is known as the Coalition for Better Ads, which includes Google, Facebook, News Corp, and The Washington Post.

Unacceptable adverts include things such as pop-up adverts and ads that expand on their own.

Describing the upcoming feature, Google said: “Chrome has always focused on giving you the best possible experience browsing the web. 

“For example, it prevents pop-ups in new tabs based on the fact that they are annoying. 

“In dialogue with the Coalition and other industry groups, we plan to have Chrome stop showing ads (including those owned or served by Google) on websites that are not compliant with the Better Ads Standards starting in early 2018.”

Google is also set to introduce an option for website visitors to pay websites directly – in compensation for the adverts they're blocking.

Dubbed Funding Choices, the feature has been in testing for some time, but Google hopes a next-generation version of the model will be ready to roll out alongside its ad-blocking push.

One group of websites left fearing for their future in light of the upcoming Google Chrome ad-blocker is torrent sites.

Chrome is the world’s most popular browser, and the leading browser among visitors to many torrent websites.

The upcoming ad blocker is expected to have a big effect on torrent sites and the revenue they bring in.

The owner of one torrent site, who did not want to be named, told TorrentFreak that the Google Chrome ad blocker could signal the end of torrents.

They said: “The torrent site economy is in a bad state. Profits are very low. Profits are f***** compared to previous years.

“Chrome’s ad-blocker will kill torrent sites. If they don’t at least cover their costs, no one is going to use money out of his pocket to keep them alive.

“I won’t be able to do so at least.”

Source: This article was published on express.co.uk by Dion Dassanayake

Using data from human "quality raters," Google hopes to teach its algorithms how to better spot offensive and often factually incorrect information.

Google is undertaking a new effort to better identify content that is potentially upsetting or offensive to searchers. It hopes this will prevent such content from crowding out factual, accurate and trustworthy information in the top search results.

“We’re explicitly avoiding the term ‘fake news,’ because we think it is too vague,” said Paul Haahr, one of Google’s senior engineers who is involved with search quality. “Demonstrably inaccurate information, however, we want to target.”

New role for Google’s army of ‘quality raters’

The effort revolves around Google’s quality raters, over 10,000 contractors that Google uses worldwide to evaluate search results. These raters are given actual searches to conduct, drawn from real searches that Google sees. They then rate the pages that appear in the top results according to how well those pages answer the query.

Quality raters do not have the power to alter Google’s results directly. A rater marking a particular result as low quality will not cause that page to plunge in rankings. Instead, the data produced by quality raters is used to improve Google’s search algorithms generally. In time, that data might have an impact on low-quality pages that are spotted by raters, as well as on others that weren’t reviewed.

Quality raters use a set of guidelines that are nearly 200 pages long, instructing them on how to assess website quality and whether the results they review meet the needs of those who might search for particular queries.

The new ‘Upsetting-Offensive’ content flag

Those guidelines have been updated with an entirely new section about “Upsetting-Offensive” content that covers a new flag that’s been added for raters to use. Until now, pages could not be flagged by raters with this designation.

The guidelines say that upsetting or offensive content typically includes the following things (the bullet points below are quoted directly from the guide):

  • Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

The guidelines also include examples. For instance, here’s one for a search on “holocaust history,” giving two different results that might have appeared and how to rate them:

The first result is from a white supremacist site. Raters are told it should be flagged as Upsetting-Offensive because many people would find Holocaust denial to be offensive.

The second result is from The History Channel. Raters are not told to flag this result as Upsetting-Offensive because it’s a “factually accurate source of historical information.”

In two other examples given, raters are instructed to flag a result said to falsely represent a scientific study in an offensive manner and a page that seems to exist solely to promote intolerance.

Being flagged is not an immediate demotion or a ban

What happens if content is flagged this way? Nothing immediate. The results that quality raters flag are used as “training data” for Google’s human coders who write search algorithms, as well as for its machine learning systems. Basically, content of this nature is used to help Google figure out how to automatically identify upsetting or offensive content in general.

In other words, being flagged as “Upsetting-Offensive” by a quality rater does not actually mean that a page or site will be identified this way in Google’s actual search engine. Instead, it’s data that Google uses so that its search algorithms can automatically spot pages generally that should be flagged.

If the algorithms themselves actually flag content, then that content is less likely to appear for searches where the intent is deemed to be about general learning. For example, someone searching for Holocaust information is less likely to run into Holocaust denial sites, if things go as Google intends.

Being flagged as Upsetting-Offensive does not mean such content won’t appear at all in Google. In cases where Google determines there’s an explicit desire to reach such content, it will still be delivered. For example, someone who is explicitly seeking a white supremacist site by name should get it, raters are instructed.

Those explicitly seeking offensive content will get factual information

What about searches where people might already have made their minds up about particular situations? For example, if someone who already doubts the Holocaust happened does a search on that topic, should that be viewed as an explicit search for material that supports it, even if that material is deemed upsetting or offensive?

The guidelines address this. They acknowledge that people may search for possibly upsetting or offensive topics, but take the view that in all cases the assumption should lean toward returning trustworthy, factually accurate and credible information.

From the guidelines:

Remember that users of all ages, genders, races, and religions use search engines for a variety of needs. One especially important user need is exploring subjects which may be difficult to discuss in person. For example, some people may hesitate to ask what racial slurs mean. People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society.

When the user’s query seems to either ask for or tolerate potentially upsetting, offensive, or sensitive content, we will call the query an “Upsetting-Offensive tolerant query”. For the purpose of Needs Met rating, please assume that users have a dominant educational/informational intent for Upsetting-Offensive tolerant queries. All results should be rated on the Needs Met rating scale assuming a genuine educational/informational intent.

In particular, to receive a Highly Meets rating, informational results about Upsetting-Offensive topics must:

  1. Be found on highly trustworthy, factually accurate, and credible sources, unless the query clearly indicates the user is seeking an alternative viewpoint.
  2. Address the specific topic of the query so that users can understand why it is upsetting or offensive and what the sensitivities involved are.

Important:

  • Do not assume that Upsetting-Offensive tolerant queries “deserve” offensive results.
  • Do not assume Upsetting-Offensive tolerant queries are issued by racist or “bad” people.
  • Do not assume users are merely seeking to validate an offensive or upsetting perspective.

It also gives some examples of interpreting searches for Upsetting-Offensive topics.

Will it work?

Google told Search Engine Land that it has already been testing these new guidelines with a subset of its quality raters and used that data as part of a ranking change back in December. That was aimed at reducing offensive content that was appearing for searches such as “did the Holocaust happen.”

The results for that particular search have certainly improved. In part, the ranking change helped. In part, all the new content that appeared in response to outrage over those search results had an impact.

But beyond that, Google no longer returns a fake video of President Barack Obama purportedly saying he was born in Kenya, for a search on “obama born in kenya,” as it once did (unless you choose the “Videos” search option, where that fakery hosted on Google-owned YouTube remains the top result).

Similarly, a search for “Obama pledge of allegiance” is no longer topped by a fake news site saying he was banning the pledge, as was previously the case. That’s still in the top results but behind five articles debunking the claim.

Still, all’s not improved. A search for “white people are inbred” continues to have as its top result content that would almost certainly violate Google’s new guidelines.

“We will see how some of this works out. I’ll be honest. We’re learning as we go,” Haahr said, admitting that the effort won’t produce perfect results. But Google hopes it will be a big improvement. Haahr said quality raters have successfully helped shape Google’s algorithms in other ways, and he is confident they’ll help it improve in dealing with fake news and problematic results.

“We’ve been very pleased with what raters give us in general. We’ve only been able to improve ranking as much as we have over the years because we have this really strong rater program that gives us real feedback on what we’re doing,” he said.

In an increasingly charged political environment, it’s natural to wonder how raters will deal with content that’s easily found on major news sites that call both liberals and conservatives idiots or worse. Is this content that should be flagged as “Upsetting-Offensive”? Under the guidelines, no. That’s because political orientation is not one of the covered areas for this flag.

How about for non-offensive but nevertheless fake results, such as “who invented stairs” causing Google to list an answer saying they were invented in 1948?

Or a situation that plagues both Google and Bing: a fake story about someone who “invented” homework.


Other changes to the guidelines might help with that, Google said: raters are being directed to do more fact-checking of answers and, effectively, to give sites more credit for being factually correct than for merely seeming authoritative.

Source: This article was published on searchengineland.com by Ginny Marvin

There have never been so many online learning resources, but that has a downside.

It's hard to overstate the vastness and confusion of the online learning ecosystem circa 2017.

It's a realm that extends from online mirrors of university classes and even whole degree programs to niche tutorial subscriptions like Angular University to pioneers like Coursera. As someone who's tried it, I can say that just approaching the Google search bar with a topic of interest is unlikely to yield a tutorial or course or program that's really ideal for the learner. There are too many variables: time commitment, workload, cost, interactivity, length, skill-level, prestige, certification (if any). And this is on top of all of the usual confounding search engine noise.

Part of the problem when it comes to programming and development skills is that there are many skills subsets (or stacks) and to newcomers it's not always clear how to gain those skills in an optimal way. It's actually really easy to find an extremely suboptimal learning path, by, say, trying to muddle through a course out of your depth or by focusing on a skill that's heading for obsolescence.

Surely there are busloads of would-be programmers that have just been turned off by the messiness of the whole thing: programming languages, transpiled programming languages, transpilers, programming language frameworks, web frameworks, HTML, compiled HTML, CSS, SASS, APIs, Amazon Web Services, containers services, reactive programming, functional programming, imperative programming, object-oriented programming, WebStorm, Atom, Sublime Text, Vim, and on and on and on. I could try and tell you a right way of navigating all of the skill trees involved in web development (or other sorts of development), but even if I came up with an optimal learning path, this stuff is changing all the time. 

Enter Learn Anything. It's kind of a search engine. The basic idea is that you punch in a skillset you'd like to learn and it will return not a Google-like list of results, but a skill tree offering a clear way of navigating an optimized learning path. Included with that tree are links to curated learning resources. The content is all open-source and open to contributors, whose participation seems pretty necessary to keeping Learn Anything useful.
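
To make that concrete, here is a minimal sketch of the kind of data structure a skill tree implies: topics carrying curated resource links plus the topics to tackle next. The field names and example topics are my own illustrative assumptions, not Learn Anything's actual schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SkillNode:
    """One topic on a learning path, with curated resources and follow-on topics."""
    topic: str
    resources: list[str] = field(default_factory=list)       # curated tutorial/course links
    children: list[SkillNode] = field(default_factory=list)  # what to learn next

def print_path(node: SkillNode, depth: int = 0) -> None:
    """Walk the tree depth-first, printing it as an ordered learning path."""
    print("  " * depth + node.topic)
    for url in node.resources:
        print("  " * depth + "  -> " + url)
    for child in node.children:
        print_path(child, depth + 1)

# Hypothetical slice of a web-development tree.
web_dev = SkillNode("HTML & CSS", ["https://developer.mozilla.org/"], [
    SkillNode("JavaScript", [], [
        SkillNode("A front-end framework"),
        SkillNode("Build tools & transpilers"),
    ]),
])
print_path(web_dev)
```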

Source: This article was published on motherboard.vice.com by Michael Byrne

Stories could be built using Google AMP technology

Google is developing technology that would allow news publishers to build Snapchat-style stories that would live inside the company’s search engine, according to a report from The Wall Street Journal. The stories may resemble what publishers have in the past created for Snapchat’s Discover section, which mixes mobile-first design with a blend of photos, videos, and text. Google is said to call the product “Stamp,” with the “St” standing for stories. There is the possibility that it could live beneath the search bar, where Android users are already served a list of recommended websites and news stories.

It’s unclear where exactly the feature or service would live, but the report says Google is building it around its AMP webpage tech. That would ensure that the stories, in whatever form they take, load fast, are uncluttered, and feature advertisements that Google serves and controls. The report says Google is already talking with CNN, The Washington Post, Time, and Vox Media, among others. (Vox Media is the parent company of The Verge.) “Ever since the beginning of AMP we’ve constantly collaborated with publishers, and are working on many new features,” a Google spokesperson told the WSJ.

Although it may sound as if Google’s primary target here is the Snapchat demographic — that is true to an extent, as Snapchat-owned Snap Inc. continues gobbling up teen mindshare and an increasing fraction of web advertising spend — Facebook poses the larger threat to Google’s search business. Facebook’s Instant Articles feature, a competitor to AMP, may not be as successful as Google’s own webpage tech, but Facebook’s app-centric approach to controlling how information, news, and entertainment are disseminated on the internet poses an ongoing existential risk to Google’s web-based ad business. The more people who use Facebook’s app, the fewer people turn to Google search, the logic goes.

So both companies are fighting to preserve their platforms as the primary place users seek out and find information, with Facebook using its social network and Google using its search engine. Now, it appears Google wants to combat Facebook’s, and to a lesser extent Snap’s, grip on news and entertainment content by encouraging publishers to create their own stories for Google’s custom product. It’s not clear how revenue would be split, or whether Google would allow publishers to repost the custom stories on their own websites or on other platforms like Facebook. This does signal a move from Google to take a more active role in attracting more users for reasons unrelated to typing in a search query.

Source: This article was published on theverge.com by Nick Statt

Tuesday, 18 July 2017 08:13

Amazon: Dependent On Search Engines?

Summary

Google (and other search engines) are key suppliers for Amazon.

Amazon.com receives approximately 353 million to 478 million visitors to its website per month, for free, from search engines.

These customers come to Amazon at a low cost and offer large returns.

When looking at new investment opportunities, I try to analyze the potential risks of a business model as much as I do the potential opportunities of the product and market. This month, I’ve been researching and considering starting a long position in Amazon (AMZN). Amazon is a difficult business to analyze from a quantitative, qualitative, and even emotional perspective. At first glance, Amazon looks to be in a relatively undefeatable position. Moreover, I am an avid Amazon customer for everything from paper towels to renting movies on my smart TV.

Anyway, analyzing the upside in Amazon’s growth story is very common here on Seeking Alpha. And over the last few years, every Amazon bull has been counting a large pile of chips. However, I have not been long Amazon and have missed the 350% upside over the last 5 years. In my own analysis, I'm working to determine if Amazon still makes sense at this level and valuation. In this article, I want to highlight one of the business risks I discovered in my analysis.

According to SimilarWeb, Amazon.com acquires about 24.44% of its desktop website traffic from search engines. From there, 91.09% of Amazon.com’s search engine traffic can be considered organic, and 8.91% can be considered paid. For reference, “organic” just means that Amazon.com is receiving these website visitors for free from the search engines.

When analyzing a business, I believe that key suppliers are an important risk to consider. And after looking at Amazon.com’s customer acquisition strategy, I believe that Google (and other search engines) are key suppliers for Amazon. Moreover, these search engines are key suppliers of free customers.

According to Ahrefs and SimilarWeb, Amazon.com receives approximately 353 million to 478 million visitors to its website per month, for free, from search engines. Search Engine Journal reports that Google controls 85.82% of search engine market share. Doing some simple math, we could estimate that Amazon.com receives approximately 357 million visitors from Google’s search engine per month.
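
That estimate is easy to reproduce. Here is the back-of-the-envelope math as a short sketch; the figures are the ones cited above, and taking the midpoint of the traffic range is my own simplifying assumption.

```python
# Monthly search-engine visitors to Amazon.com, per Ahrefs/SimilarWeb (range cited above).
low, high = 353e6, 478e6
midpoint = (low + high) / 2   # ~415.5 million visitors/month from all search engines

google_share = 0.8582         # Google's search market share, per Search Engine Journal
from_google = midpoint * google_share

print(f"~{from_google / 1e6:.0f} million visitors/month via Google")  # ~357 million
```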

Source: This article was published on seekingalpha.com

Osama bin Laden and his advisor Ayman al Zawahiri. Hamid Mir/Wikipedia, CC BY-SA

What went through the mind of the suicide bomber Salman Abedi just before he blew himself up in Manchester this week, killing 22 people? We often dismiss terrorists as non-humans, monsters, at first. But when we learn that they were seemingly normal individuals with families and jobs, it’s hard not to wonder about how their minds really work.

The search for a terrorist “personality” or “mindset” dominated psychological research in the 1970s and 1980s and remains a significant area for research today. A new study published in Nature Human Behaviour, which assessed the cognitive and psychological profiles of 66 Colombian paramilitaries imprisoned for committing terrorist acts, now argues that poor moral reasoning is what defines terrorists.

The idea behind such research is obvious – it’s to identify stable, predictive traits or “markers” of terrorist personalities. If we could do that, we may be able to predict who will become a terrorist – and perhaps prevent it. But this type of research is viewed by many psychologists, myself included, with extreme caution. Researchers carrying out such studies typically use a myriad of psychometric measures, personality and IQ tests in various contexts. But there’s no consensus on how useful these tests are.

And even if we did manage to pin down terrorist markers, what would we do with this knowledge? Would we all be tested across our lifespan? What would happen if we had a marker?

Appeal case of mass murderer Anders Breivik. LISE AASERUD/EPA

The term “terrorist mindset” is also problematic because it fuels the notion that terrorists are abnormal, resulting in knee-jerk endeavours to uncover the abnormality. For psychologists, abnormal suggests presence of a disorder, deficit or illness which makes terrorists “sick” or different. This idea seems plausible because it helps us come to terms with extreme behaviour.

But terrorist atrocities are undoubtedly the end of a chain of events which only achieve significance with the benefit of hindsight. By focusing on the event itself, how the terrorist was behaving at that time or how he/she may have been thinking in the immediate run up, our understanding becomes distorted. This is because the process of becoming a terrorist has been overlooked.

Study on Colombian paramilitaries

Of course it’s not easy to get hold of terrorists prior to an attack. Most research therefore concerns terrorists that have been caught or are suspected terrorists. The new study did just this. Imprisoned Colombian paramilitaries completed a battery of social-cognitive tests, creating individual profiles – including assessments of moral cognition, IQ, executive functioning, aggressive behaviour and emotion recognition. They were then compared with 66 non-criminals.

The researchers found terrorists had higher levels of aggression and lower levels of emotion recognition than non-criminals. However, no differences were found between the groups for IQ or executive functioning. The biggest difference between the terrorists and the other group was seen in moral cognition – they found that terrorists are guided by an abnormal over-reliance on outcomes. The authors argue that this distorted moral reasoning – that the ends justify the means – is the “hallmark” of a terrorist mindset. They assessed moral judgement by asking participants to rate various stories according to levels of unjustified aggression.

Relatives of a victim killed in a Colombian conflict by guerrilla or paramilitaries between 1991 and 2008. EPA/Luis Eduardo Noriega

The results are intriguing and seem intuitive. But we cannot be sure that this profile wasn’t a result of their incarceration – we know that prison distorts cognition. If not, was it present from birth or did it develop in the run up to becoming part of a terrorist group?

These questions cannot be answered, yet they are fundamental. Headline statements from high-profile research of this nature can be misleading and counter-productive. Despite its appeal, there is no scientific support for the idea that terrorists are psychopaths or have a personality disorder. Often research is contradictory – some researchers argue that their findings show terrorists to be suicidal while others claim they are extrovert, unstable, uninhibited, aggressive, defensive or narcissistic.

In fact, psychopathological behaviours are more likely to conflict with a terrorist agenda than aid it – terrorism, after all, relies on commitment, motivation and discipline.

The psychology of radicalisation

Many psychologists believe that the events which occur in the years before a terrorist attack, referred to as radicalisation, offer most in terms of trying to answer why a person might turn to political violence. However, the psychology of terrorism is not well advanced. There is little empirical evidence to support existing conceptual models – and they are often limited to particular extremist groups and ideologies.

More and more psychologists are now beginning to believe that a number of key psychological components are fundamental to the radicalisation process. These include motivation, group ideologies and social processes that encourage progressive distancing from former friends, for example. Rather than measuring to predict, we might be better off devoting resources to improve understanding of what motivates individuals to join the ranks of violent extremists. Is it the fundamental human need to matter that makes people seek out others who share their reality? Psychological evidence indicates the quest for significance may indeed be an important driver of extremist behaviour.

The so-called Islamic State (IS). Alibaba2k16/Wikipedia, CC BY-SA

However, it is clear that a number of complicated factors are directly and indirectly related to radicalisation. Personality and cognitive performance may change over time and therefore seem irrelevant for prediction purposes. But it is important to note that many in society are vulnerable to being manipulated and managed by terrorist groups to perform terrorist acts because of a cognitive impairment, disability or mental illness.

Accepting that prediction may never be possible because of the complex, evolving nature of terrorism might improve the nature of research in this domain. Quality psychological research aimed at searching for markers of the radicalisation process, such as changes in dress, behaviour and social circles – which appear to have been present in the case of Abedi and others – may be fruitful. Indeed de-radicalisation schemes are increasingly important in the fight against terrorism.

Luckily, the more we find out about terrorists’ quest for significance the better we can understand the identity and social issues that are fundamental to radicalisation. So there’s every reason to be optimistic that psychology can be a powerful tool in the fight against terrorism.

Source: This article was published on theconversation.com

This article was originally published on The Conversation. Read the original article.

They’re mysterious bursts of radio waves from space that are over in a fraction of a second. Fast Radio Bursts (FRBs) are thought to occur many thousands of times a day, but since their first detection by the Parkes radio telescope a decade ago, only 30 have been observed.

But once the Australian Square Kilometre Array Pathfinder (ASKAP) joined the hunt, we had our first new FRB after just three and a half days of observing. This was soon followed by a further two FRBs. And the telescope is not even fully operational yet.

The fact that ASKAP detects FRBs so readily means it is now poised to tackle the big questions.

One of these is what causes an FRB in the first place. They are variously attributed by hard-nosed and self-respecting physicists to everything from microwave ovens to the accidental transmissions of extraterrestrials making their first baby steps in interstellar exploration.

The astounding properties of these FRBs have so enthralled astronomers that, in the decade since their discovery, there are more theories than observed bursts.

A distant flash

Representational image: a star-forming region of space called the 30 Doradus Nebula. Understanding where in the universe FRBs come from will help us answer fundamental questions about the cosmos. NASA/GETTY IMAGES

FRBs are remarkable because they are outrageously bright in the radio spectrum yet appear extremely distant. As far as astronomers can tell, they come from a long way away—halfway across the observable universe or more. Because of that, whatever makes FRBs must be pretty special, unlike anything astronomers have ever seen.

What has astronomers really excited is the fossil record imprinted on each burst by the matter it encounters during its multibillion-year crossing of the universe.

Matter in space exerts a tiny amount of drag on the radio waves as they hurtle across the universe, like air dragging on a fast-moving plane. But here’s the handy bit: the longer the radio waves, the more the drag.

By the time the radio waves arrive at our telescopes, the shorter waves arrive just before the longer ones. By measuring the time delay between the short waves and the longer ones, astronomers can work out how much matter a given burst has travelled through on its journey from whatever made it, to our telescope.
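
In radio astronomy this effect is quantified by the dispersion measure (DM), the column of free electrons the burst has passed through. As a minimal sketch, the standard cold-plasma delay formula is below; the constant is the usual ~4.15 ms GHz² pc⁻¹ cm³, and the DM value is an arbitrary illustration, not a measurement from these bursts.

```python
def dispersion_delay_ms(dm: float, f_low_ghz: float, f_high_ghz: float) -> float:
    """Arrival delay (ms) of the lower frequency relative to the higher one.

    dm is the dispersion measure in pc/cm^3, i.e. the electron column the
    burst has travelled through; k_dm is the standard dispersion constant.
    """
    k_dm = 4.149  # ms * GHz^2 * (pc/cm^3)^-1
    return k_dm * dm * (f_low_ghz**-2 - f_high_ghz**-2)

# Illustrative values only: a burst with DM = 500 pc/cm^3 observed across 1.2-1.4 GHz.
print(f"{dispersion_delay_ms(500, 1.2, 1.4):.0f} ms")  # ~382 ms of smearing across the band
```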

If we can find enough bursts, we can work out how much ordinary matter—the stuff you and I and all visible matter is made of—exists in the universe, and tally up its mass.

The best guess so far is that roughly half of all the normal matter is unaccounted for, with the missing half thought to lie in the vast voids between the galaxies—the very regions so readily probed by FRBs.

Are FRBs the weigh stations of the cosmos?

Difficult to find and harder to pinpoint

There are a few reasons why we still have so many questions about FRBs. First, they are tricky to find. It takes the Parkes telescope around two weeks of constant watching to find a burst.

Worse, even when you’ve found one, many radio telescopes like Parkes can only pinpoint its location in the sky to a region about the size of the full Moon. If you want to work out which galaxy an FRB came from, you have hundreds to choose from within that area.

The ideal FRB detector needs both a large field of view and the ability to pinpoint events to a region one thousandth the area of the Moon. Until recently, no such radio telescope existed.

A jewel in the desert

Now it does, in ASKAP, a radio telescope being built by the CSIRO (Commonwealth Scientific and Industrial Research Organisation) in Murchison Shire, 370km (230 miles) northeast of Geraldton in Western Australia. It’s actually a network of 36 antennas, each 12 metres in diameter.

ASKAP antennas during fly’s-eye observing. All the antennas point in different directions. KIM STEELE (CURTIN UNIVERSITY), AUTHOR PROVIDED

ASKAP is a very special machine, because each antenna is equipped with an innovative CSIRO-designed receiver called a phased-array feed. While most radio telescopes see just one patch of sky at time, ASKAP’s phased-array feeds see 36 different patches of sky simultaneously. This is great for finding FRBs because the more sky you can see, the better chance you have of finding them.

To find lots of FRBs we need to cast an even wider net. Normally, ASKAP dishes all point in the same direction. This is great if you’re making images or want to find faint FRBs.

Thanks to recent evidence from Parkes, we realised there might be some super-bright FRBs too.

So we took a hint from nature. In the same way that the segments of a fly’s eye allow it to see all around it, we pointed all our antennas in lots of different directions. This fly’s-eye observing mode enabled us to see a total patch of sky about the size of 1,000 full Moons.
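
As a rough sanity check on that figure, assuming each phased-array-feed beam covers on the order of a square degree (my assumption, for order-of-magnitude purposes only) and that the full Moon spans about 0.2 square degrees:

```python
beams_per_antenna = 36   # patches of sky each phased-array feed sees at once
antennas_used = 8        # dishes used in the early fly's-eye observations
beam_area_sq_deg = 0.8   # assumed per-beam area; order-of-magnitude only
full_moon_sq_deg = 0.2   # apparent area of the full Moon

total = beams_per_antenna * antennas_used * beam_area_sq_deg
print(f"~{total:.0f} sq deg, roughly {total / full_moon_sq_deg:.0f} full Moons")
```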

That’s how we discovered this new FRB within days of starting, and using just eight of ASKAP’s total of 36 antennas.

Radio image of the sky where ASKAP found its first FRB. The blue circles are the 36 patches of the sky that ASKAP antenna number 5 (named Gagurla in the local Wadjarri language) was watching at the time the FRB was detected. The red smudge marks where the FRB came from. The black dots are galaxies far, far away. The full Moon is shown to scale in the bottom corner. IAN HEYWOOD (CSIRO), AUTHOR PROVIDED

When fully operational

So far, in fly’s-eye mode we have made no attempt to combine the signals from all the antennas. ASKAP’s real party piece will be to point all the telescopes in the same direction and combine the signals from all the antennas.

This will give us a precise position for every single burst, enabling us to identify the host galaxy of each FRB and measure its exact distance.

Armed with this information, we will be able to activate our network of cosmic weigh stations. At this point we will be able to investigate a fundamental question that has been plaguing astronomers for more than 20 years: where is the missing matter in the universe?

Keith Bannister is an astronomer with CSIRO and Jean-Pierre Macquart is a senior lecturer in Astrophysics at Curtin University.

Source: This article was published on newsweek.com by Keith Bannister and Jean-Pierre Macquart

The orbits of all seven Earth-size planets in the TRAPPIST-1 system are now known.

Astronomers have nailed down the path of TRAPPIST-1h, the outermost planet in the system, finding that this world takes just under 19 Earth days to complete one lap around its small, faint host star. 

The new result suggests that TRAPPIST-1h is too cold to host life as we know it, and it confirms that all seven TRAPPIST-1 worlds circle their star in a sort of gravitational lockstep with one another, study team members said. [Exoplanet Tour: Meet the 7 Earth-Size Planets of TRAPPIST-1]

"It's incredibly exciting that we're learning more about this planetary system elsewhere, especially about planet h, which we barely had information on until now," Thomas Zurbuchen, associate administrator of NASA's Science Mission Directorate at the agency's headquarters in Washington, D.C., said in a statement. 

The orbits of the seven planets around the star TRAPPIST-1. The grey region is the zone where liquid water could exist on the surface of the planets. On planet TRAPPIST-1h, liquid water is possible under a thick layer of ice. (1 AU is the distance between the Sun and the Earth.)

Credit: A. Triaud

"This finding is a great example of how the scientific community is unleashing the power of complementary data from our different missions to make such fascinating discoveries," Zurbuchen added.

TRAPPIST-1 is a dim dwarf star just 8 percent as massive as the sun that lies about 40 light-years from Earth. In May 2016, astronomers using the TRAPPIST (Transiting Planets and Planetesimals Small Telescope) instrument in Chile announced the discovery of three roughly Earth-size planets in the system. That number jumped to seven with further observation by NASA's Spitzer Space Telescope, TRAPPIST and other ground-based telescopes.

Three of these seven worlds appear to orbit in TRAPPIST-1's "habitable zone," meaning they might be able to host liquid water, and therefore life as we know it, on their surfaces.

Despite such work, astronomers had not been able to pin down the path of TRAPPIST-1h. But they had noticed that the six other planets in the system are in "orbital resonance." That is, the worlds have tugged each other into stable orbits whose periods are related to each other by a ratio of two small integers.

This illustration shows an artist's view of the seven TRAPPIST-1 planets.

Credit: NASA/JPL-Caltech

Similarly, the Jupiter moons Io, Europa and Ganymede are in orbital resonance: For every lap Ganymede completes around Jupiter, Europa makes two orbits and Io completes four. The TRAPPIST-1 resonances are much more complex, but they adhere to the same principle. 
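
A resonance chain like this can be checked directly from orbital periods by approximating their ratios as small-integer fractions. Here is a minimal sketch using the Galilean moons' well-known periods; the helper function and its tolerance are my own illustration, not the study's method.

```python
from fractions import Fraction

def resonance_ratio(p_outer: float, p_inner: float, max_den: int = 10) -> Fraction:
    """Best small-integer approximation to the ratio of two orbital periods."""
    return Fraction(p_outer / p_inner).limit_denominator(max_den)

# Orbital periods in Earth days: Io, Europa, Ganymede.
io, europa, ganymede = 1.769, 3.551, 7.155
print(resonance_ratio(europa, io))        # 2 -> Europa:Io = 2:1
print(resonance_ratio(ganymede, europa))  # 2 -> Ganymede:Europa = 2:1, i.e. 1:2:4 overall
```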

The six planets' relationships with each other led the research team to propose six possible resonant orbits for TRAPPIST-1h. Various observations ruled out five of the six, but the sixth was confirmed with observations made by NASA's Kepler space telescope from December 2016 through March of this year, the scientists announced in the new study, which was published Monday (May 22) in the journal Nature Astronomy.

"The resonant structure is no coincidence and points to an interesting dynamical history in which the planets likely migrated inward in lockstep," lead author Rodrigo Luger, a doctoral student at the University of Washington in Seattle, said in the same statement. "This makes the system a great test bed for planet-formation and -migration theories."

TRAPPIST-1 holds the record for most planets found in orbital resonance. Second place is a tie between the exoplanetary systems Kepler-80 and Kepler-223, each of which is known to harbor four resonant worlds.

TRAPPIST-1h receives about the same amount of energy from its star as the dwarf planet Ceres, the largest object in the main asteroid belt between Mars and Jupiter, gets from Earth's sun, NASA officials said. So TRAPPIST-1h is most likely a frigid world unable to host Earth-like life, they added.

But that may not always have been the case. The star TRAPPIST-1 is thought to be between 3 billion and 8 billion years old. It was likely much brighter in its youth, perhaps bright enough to make TRAPPIST-1 habitable for several hundred million years in the ancient past, Luger said. 

Source: This article was published on space.com



