Source: Contributed by Member: Clara Johnson

When it comes to searching for niche-specific content, the Google search engine is not the best option out there. Google can be a good starting point from which to delve deeper into the content area you are searching for, but you can save much more time by using content-specific search engines. In today’s post, we are sharing some examples of academic search engines that student researchers and teachers can use to search for, find and access scholarly content. We are only featuring the most popular titles, but you can always find other options to add to the list. From Google Scholar to June, these search engines can make a real difference in your academic search. Check them out and share your feedback with us.

Some of The Best Academic Search Engines for Teachers and Student Researchers

Published in Search Engine

Source: Contributed by Member: Logan Hochstetler

As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task without the help of automated tools.

With this in mind, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via a web-based search engine for scientific data, called Science Search, that the Berkeley team is building.

As a proof-of-concept, the team is working with staff at the Department of Energy's (DOE) Molecular Foundry, located at Berkeley Lab, to demonstrate the concepts of Science Search on the images captured by the facility's instruments. A beta version of the platform has been made available to Foundry researchers.

"A tool like Science Search has the potential to revolutionize our research," says Colin Ophus, a Molecular Foundry research scientist within the National Center for Electron Microscopy (NCEM) and Science Search Collaborator. "We are a taxpayer-funded National User Facility, and we would like to make all of the data widely available, rather than the small number of images chosen for publication. However, today, most of the data that is collected here only really gets looked at by a handful of people—the data producers, including the PI (principal investigator), their postdocs or graduate students—because there is currently no easy way to sift through and share the data. By making this raw data easily searchable and shareable, via the Internet, Science Search could open this reservoir of 'dark data' to all scientists and maximize our facility's scientific impact."

The Challenges of Searching Science Data

Today, search engines are ubiquitously used to find information on the Internet, but searching science data presents a different set of challenges. For example, Google's algorithm relies on more than 200 clues to achieve an effective search. These clues can come in the form of keywords on a webpage, metadata in images or audience feedback from billions of people when they click on the information they are looking for. In contrast, scientific data comes in many forms that are radically different from an average web page, requires context that is specific to the science and often also lacks the metadata needed for effective searches.

At National User Facilities like the Molecular Foundry, researchers from all over the world apply for time and then travel to Berkeley to use extremely specialized instruments free of charge. Ophus notes that the current cameras on microscopes at the Foundry can collect up to a terabyte of data in under 10 minutes. Users then need to manually sift through this data to find quality images with "good resolution" and save that information on a secure shared file system, like Dropbox, or on an external hard drive that they eventually take home with them to analyze.

Oftentimes, the researchers that come to the Molecular Foundry only have a couple of days to collect their data. Because it is very tedious and time-consuming to manually add notes to terabytes of scientific data and there is no standard for doing it, most researchers just type shorthand descriptions in the filename. This might make sense to the person saving the file but often doesn't make much sense to anyone else.

"The lack of real metadata labels eventually causes problems when the scientist tries to find the data later or attempts to share it with others," says Lavanya Ramakrishnan, a staff scientist in Berkeley Lab's Computational Research Division (CRD) and co-principal investigator of the Science Search project. "But with machine-learning techniques, we can have computers help with what is laborious for the users, including adding tags to the data. Then we can use those tags to effectively search the data."

To address the metadata issue, the Berkeley Lab team uses machine-learning techniques to mine the "science ecosystem," including instrument timestamps, facility user logs, scientific proposals, publications and file system structures, for contextual information. The collective information from these sources, such as the timestamp of the experiment, notes about the resolution and filter used and the user's request for time, provides critical context. The Berkeley Lab team has put together an innovative software stack that uses machine-learning techniques, including natural language processing, to pull contextual keywords about the scientific experiment and automatically create metadata tags for the data.
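
As a rough illustration of the general idea, and not the project's actual code, a minimal keyword-mining sketch might pull candidate tags from a proposal line, a user-log entry and a raw filename by counting term frequency. All names, texts and the stopword list below are invented:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "for", "and", "on", "in", "with", "at", "to"}

# Hypothetical context sources for one micrograph: a proposal line,
# a user-log entry and the raw filename.
sources = [
    "Proposal: imaging of gold nanoparticles at atomic resolution",
    "user_log: TEAM 1 STEM mode, dark-field, 300 kV",
    "AuNP_darkfield_300kV_run7.tif",
]

def candidate_tags(texts, top_k=5):
    """Rank non-stopword terms across all context sources by frequency."""
    words = Counter()
    for text in texts:
        for w in re.findall(r"[a-z][a-z0-9-]+", text.lower()):
            if w not in STOPWORDS:
                words[w] += 1
    return [w for w, _ in words.most_common(top_k)]

print(candidate_tags(sources))
```

The real system draws on far richer natural language processing, but even this toy version shows how terms that recur across independent sources (here, the accelerating voltage and the specimen) surface as candidate tags.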

For the proof-of-concept, Ophus shared recently collected data from the Molecular Foundry's TEAM 1 electron microscope at NCEM with the Science Search team. He also volunteered to label a few thousand images to give the machine-learning tools some labels from which to start learning. While this is a good start, Science Search co-principal investigator Gunther Weber notes that most successful machine-learning applications typically require significantly more data and feedback to deliver better results. For example, in the case of search engines like Google, Weber notes that training datasets are created and machine-learning techniques are validated when billions of people around the world verify their identity by clicking on all the images with street signs or storefronts after typing in their passwords, or on Facebook when they're tagging their friends in an image.

Berkeley Lab researchers use machine learning to search science data
This screen capture of the Science Search interface shows how users can easily validate metadata tags that have been generated via machine learning or add information that hasn't already been captured. Credit: Gonzalo Rodrigo, Berkeley Lab

"In the case of science data only a handful of domain experts can create training sets and validate machine-learning techniques, so one of the big ongoing problems we face is an extremely small number of training sets," says Weber, who is also a staff scientist in Berkeley Lab's CRD.

To overcome this challenge, the Berkeley Lab researchers used transfer learning to limit the degrees of freedom, or parameter counts, on their convolutional neural networks (CNNs). Transfer learning is a machine-learning method in which a model developed for one task is reused as the starting point for a model on a second task, allowing the user to get more accurate results from a smaller training set. In the case of the TEAM I microscope, the data produced contains information about which operation mode the instrument was in at the time of collection. With that information, Weber was able to train the neural network on that classification so it could generate that mode-of-operation label automatically. He then froze that convolutional layer of the network, which meant he'd only have to retrain the densely connected layers. This approach effectively reduces the number of parameters on the CNN, allowing the team to get some meaningful results from their limited training data.
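
The payoff of freezing layers can be seen with simple parameter bookkeeping. The layer sizes below are hypothetical and are not those of the Science Search network; the point is only that freezing the pretrained convolutional layers leaves fewer parameters to fit from a small labeled set:

```python
# Hypothetical layer sizes for a small CNN: three conv layers feeding
# a two-layer dense "head" that predicts one of four operating modes.

def conv_params(in_ch, out_ch, k):
    # k x k kernel weights for every in/out channel pair, plus biases
    return in_ch * out_ch * k * k + out_ch

def dense_params(n_in, n_out):
    # fully connected weights plus biases
    return n_in * n_out + n_out

conv_layers = [conv_params(3, 32, 3), conv_params(32, 64, 3), conv_params(64, 64, 3)]
dense_head = [dense_params(64 * 4 * 4, 64), dense_params(64, 4)]

total = sum(conv_layers) + sum(dense_head)
# Freezing the pretrained conv layers leaves only the head to retrain.
trainable_after_freeze = sum(dense_head)

print(total, trainable_after_freeze)
```

Every frozen weight is one fewer degree of freedom the limited training images have to pin down, which is exactly the trade the team made.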

Machine Learning to Mine the Scientific Ecosystem

In addition to generating metadata tags through training datasets, the Berkeley Lab team also developed tools that use machine-learning techniques for mining the science ecosystem for data context. For example, the data ingest module can look at a multitude of information sources from the scientific ecosystem, including instrument timestamps, user logs, proposals, and publications, and identify commonalities. Tools developed at Berkeley Lab that use natural-language-processing methods can then identify and rank words that give context to the data and facilitate meaningful results for users later on. The user will see something similar to the results page of an Internet search, where content with the most text matching the user's search words will appear higher on the page. The system also learns from user queries and the search results they click on.
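
The ranking behavior described here, where files whose extracted text best matches the query rise to the top, can be sketched with a toy term-match scorer. The file records below are invented:

```python
from collections import Counter

# Invented file records: each maps an image ID to the tags and notes
# that a pipeline like the one described above might have extracted.
docs = {
    "img_001": "team 1 stem dark field gold nanoparticle medium resolution",
    "img_002": "proposal beam time request test alignment low resolution",
    "img_003": "gold nanoparticle gold lattice high resolution dark field",
}

def score(query, text):
    # Count how often each query word appears in the document's text.
    terms = Counter(text.split())
    return sum(terms[w] for w in query.split())

def search(query):
    # Highest term-match score first: the most matching text ranks highest.
    return sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)

print(search("gold nanoparticle resolution"))
```

A production engine would weight rare terms more heavily and fold in click feedback, but the ordering principle is the same.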

Because scientific instruments are generating an ever-growing body of data, all aspects of the Berkeley team's science search engine needed to be scalable to keep pace with the rate and scale of the data volumes being produced. The team achieved this by setting up their system in a Spin instance on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC). Spin is a Docker-based edge-services technology developed at NERSC that can access the facility's high-performance computing systems and storage on the back end.

"One of the reasons it is possible for us to build a tool like Science Search is our access to resources at NERSC," says Gonzalo Rodrigo, a Berkeley Lab postdoctoral researcher who is working on the natural language processing and infrastructure challenges in Science Search. "We have to store, analyze and retrieve really large datasets, and it is useful to have access to a supercomputing facility to do the heavy lifting for these tasks. NERSC's Spin is a great platform to run our search engine that is a user-facing application that requires access to large datasets and analytical data that can only be stored on large supercomputing storage systems."

An Interface for Validating and Searching Data

When the Berkeley Lab team developed the interface for users to interact with their system, they knew that it would have to accomplish a couple of objectives, including supporting effective search and allowing human input into the machine-learning models. Because the system relies on domain experts to help generate the training data and validate the machine-learning model output, the interface needed to facilitate that.

"The tagging interface that we developed displays the original data and metadata available, as well as any machine-generated tags we have so far. Expert users then can browse the data and create new tags and review any machine-generated tags for accuracy," says Matt Henderson, who is a Computer Systems Engineer in CRD and leads the user interface development effort.

To facilitate an effective search for users based on available information, the team's search interface provides a query mechanism for available files, proposals and papers that the Berkeley-developed machine-learning tools have parsed and extracted tags from. Each listed search result item represents a summary of that data, with a more detailed secondary view available, including information on tags that matched this item. The team is currently exploring how to best incorporate user feedback to improve the models and tags.

"Having the ability to explore datasets is important for scientific breakthroughs, and this is the first time that anything like Science Search has been attempted," says Ramakrishnan. "Our ultimate vision is to build the foundation that will eventually support a 'Google' for scientific data, where researchers can even search distributed datasets. Our current work provides the foundation needed to get to that ambitious vision."

"Berkeley Lab is really an ideal place to build a tool like Science Search because we have a number of user facilities, like the Molecular Foundry, that has decades worth of data that would provide even more value to the scientific community if the data could be searched and shared," adds Katie Antypas, who is the principal investigator of Science Search and head of NERSC's Data Department. "Plus we have great access to machine-learning expertise in the Berkeley Lab Computing Sciences Area as well as HPC resources at NERSC in order to build these capabilities."

Published in Online Research

The academic world is supposed to be a bright-lit landscape of independent research pushing back the frontiers of knowledge to benefit humanity.

Years of fingernail-flicking test tubes have paid off by finding the elixir of life. Now comes the hard stuff: telling the world through a respected international journal staffed by sceptics.

After drafting and deleting, adding and revising, the precious discovery has to undergo the ritual of peer-reviews. Only then may your wisdom arouse gasps of envy and nods of respect in the world’s labs and lecture theatres.

The goal is to score hits on the international SCOPUS database (69 million records, 36,000 titles – and rising as you read) of peer-reviewed journals. If the paper is much cited, the author’s CV and job prospects should glow.

SCOPUS is run by Dutch publisher Elsevier for profit.

It’s a tough track up the academic mountain; surely there are easier paths paved by publishers keen to help?

Indeed – but beware. The 148-year-old British multidisciplinary weekly Nature calls them “predatory journals” luring naive young graduates desperate for recognition.

‘Careful checking’

“These journals say: ‘Give us your money and we’ll publish your paper’,” says Professor David Robie of New Zealand’s Auckland University of Technology. “They’ve eroded the trust and credibility of the established journals. Although easily picked by careful checking, new academics should still be wary.”

Shams have been exposed by getting journals to print gobbledygook papers by fictitious authors. One famous sting reported by Nature had a Dr. Anna O Szust being offered journal space if she paid. “Oszust” is Polish for “a fraud”.

Dr Robie heads AUT’s Pacific Media Centre, which publishes the Pacific Journalism Review, now in its 23rd year. During November he was at Gadjah Mada University (UGM) in Yogyakarta, Central Java, helping his Indonesian colleagues boost their skills and lift their university’s reputation.

The quality of Indonesian learning at all levels is embarrassingly poor for a nation of 260 million spending 20 percent of its budget on education.

The international ranking systems are a dog’s breakfast, but only UGM, the University of Indonesia and the Bandung Institute of Technology make the tail end of the Times Higher Education world top 1000.

There are around 3500 “universities” in Indonesia; most are private. UGM is public.

UGM has been trying to better itself by sending staff to Auckland, New Zealand, and Munich, Germany, to look at vocational education and master new teaching strategies.

Investigative journalism

Dr. Robie was invited to Yogyakarta through the World Class Professor (WCP) programme, an Indonesian government initiative to raise standards by learning from the best.

Dr. Robie lectured on “developing investigative journalism in the post-truth era,” researching marine disasters and climate change. He also ran workshops on managing international journals.

During a break at UGM, he told Strategic Review that open access – meaning no charges made to authors and readers – was a tool to break the user-pays model.

AUT is one of several universities to start bucking the international trend to corral knowledge and muster millions. The big publishers reportedly make up to 40 percent profit – much of it from library subscriptions.


Pacific Journalism Review’s Dr. David Robie being presented with a model of Universitas Gadjah Mada’s historic main building for the Pacific Media Centre at the editor's workshop in Yogyakarta, Indonesia.

According to a report by AUT digital librarians Luqman Hayes and Shari Hearne, there are now more than 100,000 scholarly journals in the world put out by 3000 publishers; the number is rocketing so fast library budgets have been swept away in the slipstream.

In 2016, Hayes and his colleagues established Tuwhera (Māori for “be open”) to help graduates and academics liberate their work by hosting accredited and refereed journals at no cost.

The service includes training on editing, presentation and creating websites, which look modern and appealing. Tuwhera is now being offered to UGM – but Indonesian universities have to lift their game.

Language an issue
The issue is language, and it’s a problem, according to Dr. Vissia Ita Yulianto, a researcher at UGM’s Southeast Asian Social Studies Centre (CESASS) and a co-editor of the IKAT research journal. Educated in Germany, she has been working with Dr. Robie to develop journals and ensure they are top quality.

“We have very intelligent scholars in Indonesia but they may not be able to always meet the presentation levels required,” she said.

“In the future, I hope we’ll be able to publish in Indonesian; I wish it wasn’t so, but right now we ask for papers in English.”

Bahasa Indonesia, originally trade Malay, is the official language. It was introduced to unify the archipelagic nation with more than 300 indigenous tongues. Outside Indonesia and Malaysia it is rarely heard.

English is widely taught, although not always well. Adrian Vickers, professor of Southeast Asian Studies at Sydney University, has written that “the low standard of English remains one of the biggest barriers against Indonesia being internationally competitive.

“… in academia, few lecturers, let alone students, can communicate effectively in English, meaning that writing of books and journal articles for international audiences is almost impossible.”

Though the commercial publishers still dominate, there are now almost 10,000 open-access peer-reviewed journals on the internet.

“Tuwhera has enhanced global access to specialist research in ways that could not previously have happened,” says Dr Robie. “We can also learn much from Indonesia and one of the best ways is through exchange programmes.”

This article was first published in Strategic Review and is republished with the author Duncan Graham’s permission. Graham blogs at

Published in How to

The new A.I. system could soon make its way onto your smartphone.

Your phone might someday save your skin.

Stanford researchers say they've created a new artificial intelligence system that can identify skin cancer as well as trained doctors can. According to a study they published in science journal Nature, the program was able to distinguish between cancerous moles and harmless ones with more than 90 percent accuracy.

The researchers trained the system by feeding it nearly 130,000 images of moles and lesions, with some of them being cancerous. The system scanned the images pixel by pixel, identifying characteristics that helped it make each diagnosis. Using machine learning, the A.I. grew more accurate as it studied more samples.

It then went head to head with 21 trained dermatologists. The result: The A.I. software achieved "performance on par with all tested experts." The system correctly identified 96 percent of the malignant samples, and 90 percent of the (generally harmless) benign ones. For the doctors in the study, those numbers were 95 percent and 76 percent, respectively.
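
In diagnostic terms, those two figures are sensitivity (the share of malignant samples caught) and specificity (the share of benign samples correctly cleared). A quick sanity check, using hypothetical confusion counts of 100 samples per class chosen to reproduce the article's A.I. figures:

```python
def sensitivity(true_pos, false_neg):
    # Fraction of actual positives (malignant samples) correctly identified.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of actual negatives (benign samples) correctly identified.
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: of 100 malignant samples, 96 flagged and 4 missed;
# of 100 benign samples, 90 cleared and 10 falsely flagged.
print(sensitivity(96, 4))   # 0.96
print(specificity(90, 10))  # 0.9
```

The gap between the doctors' 76 percent and the A.I.'s 90 percent on benign moles matters clinically: lower specificity means more unnecessary biopsies.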

This could have huge implications: The study points out that 5.4 million new cases of skin cancer are diagnosed each year in the U.S. alone. If installed in smartphones, the authors say, this technology could provide a simple, low cost form of early detection.

Identifying melanoma early on is critical. The five-year survival rate when the cancer is caught in its earliest stages is 99 percent. That number drops to 14 percent when it is detected in its late stages. Having the equivalent of a dermatologist in your pocket, as far as diagnosing goes, could help patients keep a closer watch on their own skin and seek medical treatment sooner.

That's not to say dermatologists will be replaced--they'd still be the ones to perform any procedures necessary. And in a blog post on Stanford's website, the authors suggest doctors might use the tool for in-office diagnoses.

Before the system can achieve its potential, though, it will have to be able to detect cancer from images captured by smartphones. While phone cameras are rapidly improving, the A.I. is currently trained to work only with high quality medical images.

Still, the technology is moving in that direction. Being able to detect skin cancer early could have an impact on the 10,000 people who die from the disease each year in the U.S. alone.

The Stanford researchers developed the framework for the A.I. system using an image classification algorithm that had previously been built by Google.

Source: This article was published by Kevin J. Ryan

Published in Online Research

You, too, could become a troll. Not a mythological creature that hides under bridges, but one of those annoying people who post disruptive messages in internet discussion groups – "trolling" for attention – or who throw out off-topic racist, sexist or politically controversial rants. The term has come to be applied to posters who use offensive language, harass other posters and generally conjure up the image of an ugly, deformed beast.

It has been assumed that trolls are just naturally nasty people being themselves online, but according to Cornell research, what makes a troll is a combination of a bad mood and the bad example of other trolls.

"While prior work suggests that trolling behavior is confined to a vocal and anti-social minority, ordinary people can, under the right circumstances, behave like trolls," said Cristian Danescu-Niculescu-Mizil, assistant professor of information science. He and his colleagues actually caused that to happen in an online experiment.

They described their research at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Feb. 25–March 1 in Portland, Oregon, where they received the Best Paper Award. The team included Stanford University computer science professors Michael Bernstein and Jure Leskovec, and their doctoral student Justin Cheng '12.

To tease out possible causes of trolling, the researchers set up an online experiment. Through the Amazon Mechanical Turk service, where people can be hired to perform online tasks for a small hourly payment, they recruited people to participate in a discussion group about current events.

Participants were first given a quiz consisting of logic, math and word problems, then shown a news item and invited to comment. To compare the effects of positive and negative mood, some participants were given harder questions or were told afterward that they had performed poorly on the quiz. To compare the effects of exposure to other trolls, some were led into discussions already seeded with real troll posts copied from existing online comments. The experiment showed that bad mood and bad example could lead to offensive posting.

Following up, the researchers reviewed 16 million posts, noting which posts were flagged by moderators, and applying computer text analysis and human review of samples to confirm that these qualified as trolling. They found that as the number of flagged posts among the first four posts in a discussion increases, the probability that the fifth post is also flagged increases. Even if only one of the first four posts was flagged, the fifth post was more likely to be flagged. This study was described in a separate paper, "Antisocial Behavior in Online Discussion Communities," presented at the Ninth International AAAI Conference on Web and Social Media, May 2015 at Oxford University.
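
The measurement itself is straightforward to sketch: group threads by how many of their first four posts were flagged, then compute the fraction whose fifth post was also flagged. The thread data below is invented, not the study's:

```python
from collections import defaultdict

# Invented threads: ([moderator flags on posts 1-4], flag on post 5).
threads = [
    ([0, 0, 0, 0], 0), ([0, 0, 0, 0], 0), ([0, 0, 0, 0], 1),
    ([1, 0, 0, 0], 0), ([1, 0, 0, 0], 1),
    ([1, 1, 0, 0], 1), ([1, 1, 1, 0], 1),
]

# k early flagged posts -> [thread count, count of flagged fifth posts]
counts = defaultdict(lambda: [0, 0])
for first_four, fifth in threads:
    k = sum(first_four)
    counts[k][0] += 1
    counts[k][1] += fifth

# Estimated P(fifth post flagged | k early posts flagged), for each k seen.
for k in sorted(counts):
    n, flagged = counts[k]
    print(k, flagged / n)
```

In this made-up sample the conditional probability rises with k, which is the "contagion" pattern the Cornell team reported at scale.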

Using day and time as a stand-in for mood, they found that ordinary posters were more likely to troll late at night, and more likely on Monday than Friday.

It might be possible to build some troll reduction into the design of discussion groups, the researchers propose. A person likely to start trolling could be identified based on recent participation in discussions where they might have been involved in heated debate. Mood can be inferred from keystroke movements. In these and other cases a time limit on new postings might allow for cooling off. Moderators could remove troll posts to limit contagion. Allowing users to retract posts may help, they added, as would reducing other sources of user frustration, such as poor interface design or slow loading times.
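
One of those proposals, a time limit that lets frustrated posters cool off, could be as simple as a per-user delay between accepted posts. This is a minimal sketch; the window length is an arbitrary assumption, not a value from the research:

```python
import time

COOLDOWN_SECONDS = 600  # assumed 10-minute cooling-off window
last_post = {}          # user -> timestamp of their last accepted post

def may_post(user, now=None):
    """Allow a post only if the user's last accepted post is old enough."""
    if now is None:
        now = time.time()
    if now - last_post.get(user, float("-inf")) < COOLDOWN_SECONDS:
        return False  # still cooling off
    last_post[user] = now
    return True
```

A real system would pair this with the mood and context signals described above, tightening the window only for users showing signs of a heated exchange.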

The point of their research, the researchers conclude, is to show that not all trolling is done by inherently anti-social people, so looking at the whole situation may better reflect the reality of how trolling occurs, and perhaps help us see less of it.

Source: This article was published by Bill Steele

Published in Internet Privacy

The average sex life appears to be dwindling - and it may reflect some troubling anxieties at the heart of modern society, says Simon Copland.

We live in one of the most sexually liberated times in human history. Access to new technologies over the past 40 years, whether the contraceptive pill or dating apps such as Grindr and Tinder, has opened a new world of possibilities. As the sexual revolution of the 1970s matured, societal norms shifted with it, with increasing acceptance of homosexuality, divorce, pre-marital sex, and alternative relationships such as polyamory and swinging.

Despite this, research suggests that we’re actually having less sex now than we have for decades.

On average, Americans had sex nine fewer times per year in the early 2010s compared to the late 1990s – a 15% drop

In March, American researchers Jean Twenge, Ryne Sherman and Brooke Wells published an article in the Archives of Sexual Behavior showing that Americans were having sex on average nine fewer times per year in the early 2010s compared to the late 1990s – a 15% drop from 62 times a year to just 53. The declines were similar across gender, race, region, educational level and work status, with married people reporting the most significant drops.

While it could be easy to dismiss this as a one-off, or a symptom of the challenges of researching people’s sex lives, this is another point in a growing trend across the world. In 2013, the National Survey of Sexual Attitudes and Lifestyles (Natsal) found that British people between ages 16 and 44 had sex just under five times per month. This was a drop from the previous survey, released in 2000, where men were recorded to have sex 6.2 times a month, and women 6.3 times. In 2014 the Australian National Survey of Sexual Activity showed that people in heterosexual relationships were having sex on average 1.4 times per week, down from 1.8 times 10 years earlier. The situation is perhaps most severe in Japan, where recent data has shown that 46% of women and 25% of men between the ages of 16 and 25 ‘despise’ sexual contact.

Why is this happening?

While there are many simple conclusions available, BBC Future dug deeper and found a situation that is quite complex.

Porn blame

An easy first conclusion to make is that increased access to technology is to blame. Two technologies are usually targeted: online pornography and social media.

With the growth of online pornography, researchers have focused on its addictive potential, with some trying to label ‘internet sex addiction’ as an official psychiatric disorder. As an addiction, it is argued that porn acts as a replacement for real-life sex, limiting our sexual desire in the bedroom.


Social media and pornography are often blamed for damaging our sex lives - yet the evidence is far from clear cut (Credit: Alamy)

Porn is also blamed for its unrealistic imagery, with researchers arguing this can create symptoms such as ‘sexual anorexia’, or ‘porn induced sexual dysfunction’. In 2011, a survey of 28,000 porn viewers in Italy found that many engaged in an “excessive” consumption of porn sites. The daily use of porn, researcher Carlo Foresta argued, means that these people became inured to “even the most violent” images. According to this theory, these unrealistic images found in porn make it difficult for men in particular to get aroused when encountering the real thing, resulting in them becoming ‘hopeless’ in the bedroom.

Some researchers have even argued there is a link between porn and marriage rates. In a study in 2014, researchers Michael Malcolm and George Naufal surveyed 1,500 participants in the United States to analyse how 18 to 35 year-olds used the internet, and what impact this had on their romantic lives. The results, published in the Eastern Economic Journal, found a strong correlation between high levels of internet use and low marriage rates, a factor that was even more significant for men who viewed online pornography on a regular basis.

And it’s not just pornography. Social media in particular has been blamed as a distraction, with people obsessing over their screens instead of their sexual lives. This is an extension of research that previously suggested having a TV in a couple's bedroom significantly reduces sexual activity. It would make sense that the intrusion of social media devices into all aspects of our lives could have a similar effect. 

But there are good reasons to question both of these conclusions. Researchers are split on the impact of pornography on our sexual lives, with many debating the existence of ‘internet sex addiction’ in the first place. Others have noted the potential for pornography to enhance sexual activity. For example, in 2015 an article in the journal Sexual Medicine found that watching at least 40 minutes of porn at least twice a week boosted people’s libido and desire to have sex. This study tested the libido of 280 men measured against their use of pornography. The research found a strong correlation between the amount of time spent viewing porn and the desire to have sex, with those who watched over two hours of porn per week having the highest levels of arousal. These results were noted as well by Twenge, Sherman and Wells in their research, who, despite finding overall drops in sexual activity, found no difference in sexual activity amongst those who frequently watched pornography.


Dating apps should make it easier than ever to find a sexual partner - yet millennials appear to be having less sex than previous generations (Credit: Getty Images)

The same can be said for social media. While social devices can certainly provide a distraction they also provide increasing avenues to access ‘sex on tap’. In fact research has shown that apps such as Grindr and Tinder may speed up people's sexual lives, enabling sex on dates earlier and more regularly.

While technology definitely impacts our sexual lives, it cannot be blamed solely for the noted reductions in sexual activity.

Chained to the desk

Despite early dreams of a population liberated from work, our jobs seem to be intruding even further into our lives. Work hours remain extremely high across the Western world, with data recently showing that the average full-time employee in the US works 47 hours per week. It may seem logical to conclude that the fatigue and stress of work may lead to drops in sexual activity.

However, it is not quite as simple as that. In 1998, for example, Janet Hyde, John DeLamater and Erri Hewitt found in their research, published in the Journal of Family Psychology, that there was no reduction in sexual activity, satisfaction or desire between women who were homemakers and women who were employed either part-time or full-time. Contrary to the rest of their findings, Twenge, Sherman and Wells actually found that a busy work life correlated with higher sexual frequency.

Life in the fast lane can leave people feeling anxious, exhausted, and depressed - all of which may take a toll on their sex lives (Credit: Alamy)

But that does not mean work does not have an impact; instead, it’s the quality, rather than the quantity, of our work that matters. Having a bad job can be worse for your mental health than having no job, and this extends to our sexual lives as well. Stress in particular is increasingly being seen as the core indicator of drops in sexual activity and sexual happiness.

In 2010, for example, Guy Bodenmann at the University of Zurich and his research team studied 103 female students in Switzerland over a three-month period, finding that higher self-reported stress was associated with lower levels of sexual activity and satisfaction. Stress has multiple impacts: it changes hormone levels, contributes to negative body image, makes us question relationships and partners, and increases drug and alcohol use. All of these correlate with drops in sexual activity and sexual drive.

It’s about modern life

There are many other reasons to think that changes in our mental health and wellbeing may be damaging our sex lives. While Twenge, Sherman and Wells discounted both pornography use and work hours as causes behind the drops in sexual activity, the researchers argued the drops may be due to increasing levels of unhappiness. Western societies in particular have seen a mental health epidemic in the past few decades, focused primarily around depression and anxiety disorders.

There is a strong correlation between depressive symptoms and reductions in sexual activity and desire. In a review of relevant studies for the Journal of Sexual Medicine, Evan Atlantis and Thomas Sullivan at the University of Adelaide found significant evidence that depression leads to increases in sexual dysfunction and reductions in sexual desire. Bringing this evidence together with the noted increases in mental health issues, Twenge, Sherman and Wells argue there is a causal link between drops in happiness and average drops in sexual activity.

Research connects these mental health epidemics with the increasingly insecure nature of modern life, particularly for younger generations. It is this generation that has shown the highest drops in sexual activity, with research from Jean Twenge showing millennials are reporting having fewer sexual encounters than either Generation X or the baby boomers did at the same age. Job and housing insecurity, the fear of climate change, and the destruction of communal spaces and social life, have all been found to connect to mental health problems. 


Drops in sexual activity could therefore be argued to reflect the nature of modern life. This phenomenon cannot be pinned on one problem or another; it is the culmination of many things: the stresses of modern life, a mixture of work, insecurity and technology.

Diagnoses of depression and anxiety have continued to rise during the last decade (Credit: Alamy)

Some may celebrate drops in sexual activity as a rejection of loosening sexual mores. But sex is important. It increases happiness, makes you healthier, and even makes you more satisfied at work. Most importantly, for the vast majority of people, sex is fun.

It is for these reasons that people around the world are trying to find ways to deal with this issue. In February this year, Per-Erik Muskos, a councilman from the town of Övertorneå in Sweden, introduced a proposal to give the municipality’s 550 employees a subsidised hour each week to go home and have sex. Muskos talked up the benefits of sex, saying his proposal could “be an opportunity for couples to have their own time, only for each other.”

Japan has been trying to deal with this issue for a long time, particularly over fears of a plummeting birth rate. Parents in Japan are now being provided cash for having children, while for years companies have been encouraged to give employees more time off work to procreate. This has involved one of the country’s large economic organisations, Keidanren, encouraging its 1,600 corporate members to allow their employees to spend more time with their families. Meanwhile, local authorities have encouraged procreation through a range of measures, including providing shopping vouchers to larger families and launching government-sanctioned matchmaker websites. The Australian government pursued something similar for many years, providing a ‘baby bonus’ to new parents up until 2014.

The problem with these proposals is that they are inevitably just a band-aid. While additional time off work and government incentives may have short-term effects, they do not deal with the structural problems behind the drops in happiness that may be dampening sex drives.

Just as this problem is multi-dimensional, so the solutions must be multi-dimensional as well. Tackling the sexual decline will require dealing with the very causes of the mental health crisis facing Western worlds – a crisis that is underpinned by job and housing insecurity, fears of climate change, and the loss of communal and social spaces. Doing so will not just help people with their sex lives, but benefit health and wellbeing overall. 

Source: This article was published by Simon Copland

Published in Others
Much is known about flu viruses, but little is understood about how they reproduce inside human host cells, spreading infection. Now, a research team headed by investigators from the Icahn School of Medicine at Mount Sinai is the first to identify a mechanism by which influenza A, a family of pathogens that includes the most deadly strains of flu worldwide, hijacks cellular machinery to replicate.

The study findings, published online today in Cell, also identify a link between congenital defects in that machinery—the RNA exosome—and the neurodegeneration that results in people who have that rare mutation.

It was by studying the cells of patients with an RNA exosome mutation, contributed by six collaborating medical centers, that the investigators were able to understand how influenza A hijacks the RNA exosome inside a cell's nucleus for its own purposes.

"This study shows how we can discover genes linked to disease—in this case, neurodegeneration—by looking at the natural symbiosis between a host and a pathogen," says the study's senior investigator, Ivan Marazzi, PhD, an assistant professor in the Department of Microbiology at the Icahn School of Medicine at Mount Sinai.

Influenza A is responsible in part not only for seasonal flus but also pandemics such as H1N1 and other flus that cross from mammals (such as swine) or birds into humans.

"We are all a result of co-evolution with viruses, bacteria, and other microbes, but when this process is interrupted, which we call the broken symmetry hypothesis, disease can result," Dr. Marazzi says.

The genes affected in these rare cases of neurodegeneration caused by a congenital RNA exosome mutation may offer future insight into more common brain disorders, such as Alzheimer's and Parkinson's diseases, he added. In the case of influenza A, the loss of RNA exosome activity severely compromises viral infectivity but also manifests in human neurodegeneration, suggesting that viruses target essential proteins implicated in rare disease in order to ensure continual adaptation.

Influenza A is an RNA virus that, unusually, reproduces itself inside the host cell's nucleus; most viruses replicate in a cell's cytoplasm, outside the nucleus.

The researchers found that once inside the nucleus, influenza A hijacks the RNA exosome, an essential protein complex that degrades RNA as a way to regulate gene expression. The flu pathogen needs extra RNA to start the replication process so it steals these molecules from the hijacked exosome, Dr. Marazzi says.

"Viruses have a very intelligent way of not messing too much with our own biology," he says. "It makes use of our by-products; rather than allowing the exosome to chew up and degrade excess RNA, it tags the exosome and steals the RNA it needs before it is destroyed."

"Without an RNA exosome, a virus cannot grow, so the agreement between the virus and host is that it is ok for the virus to use some of the host RNA because the host has other ways to suppress the virus that is replicated," says the study's lead author, Alex Rialdi, MPH, a graduate assistant in Dr. Marazzi's laboratory.


Published in Online Research

Researchers in Singapore have developed a way to teleport a drink over the internet — sort of.

Published in Online Research

Researchers introduce 'Splinter,' a system to quickly search databases without showing the server what's being searched for

When you search for something on the internet, it’s always been a given that your request will be recorded and stored — whether it’s a stock price, medical symptoms, or the cheapest air fare to Hawaii.

But there might be a better way — a way to quickly search large databases without revealing a user’s query, a group of MIT researchers says.

Normally, in order to perform a basic search, you need to communicate with a server, which in turn needs to know what you’re looking for in order to find the appropriate results in its database. The downside is that each search you make reveals a huge amount of information about you, and that data is frequently mined to build user profiles and target ads around some of your most private interests, thoughts, and activities.

In a paper due to be presented at the 14th USENIX Symposium on Networked Systems Design and Implementation, the researchers introduce a system called Splinter, which they say would allow completely private searches.

Essentially, the system hides the user’s queries by breaking them up into encrypted pieces, each processed by a separate server. It then uses a technique called “function secret sharing,” which performs a mathematical function on every record in the databases and returns a matching result to the user. That result can’t be understood by the server — instead, it can only be read by the user once all the pieces are re-assembled on their local device.
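The idea of splitting a query into pieces that individually reveal nothing can be illustrated with a toy two-server scheme: the client XOR-shares a one-hot selector vector between two non-colluding servers, each server XORs together the records its share selects, and the client combines the two answers to recover exactly the record it wanted. This is a hypothetical sketch, not Splinter's actual protocol — real function secret sharing compresses each server's key to roughly logarithmic size, whereas this toy sharing sends one bit per record.

```python
import secrets

def share_query(index, n):
    """XOR-share the one-hot selector for `index` between two servers.
    Each share alone is a uniformly random bit vector."""
    share_a = [secrets.randbits(1) for _ in range(n)]
    share_b = [bit ^ (1 if j == index else 0) for j, bit in enumerate(share_a)]
    return share_a, share_b

def server_answer(share, database):
    """A server XORs together the records its share selects; since its
    share is random, it learns nothing about which record was wanted."""
    acc = 0
    for selected, record in zip(share, database):
        if selected:
            acc ^= record
    return acc

db = [101, 202, 303, 404]            # public database, one int per record
a, b = share_query(2, len(db))       # client secretly wants record index 2
result = server_answer(a, db) ^ server_answer(b, db)
assert result == db[2]               # shared terms cancel, leaving record 2
```

The records each server XORs differ only at the queried index, so everything else cancels when the client combines the answers; privacy holds as long as the two servers do not collude.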

“The canonical example behind this line of work was public patent databases. When people were searching for certain kinds of patents, they gave away the research they were working on,” said Frank Wang, an MIT graduate student and the paper’s lead author, in a press statement sent to Vocativ. “Another example is maps: When you’re searching for where you are and where you’re going to go, it reveals a wealth of information about you.”

Wang says the paper comes amid increasing demand for private web searches. He notes the popularity of search engines like DuckDuckGo, which gets search results from other sites like Google but claims it doesn’t store any information about users’ queries. Companies like Least Authority and SpiderOak offer something similar for cloud storage, using “zero knowledge” systems that ensure only the user can read their stored data.

“We see a shift toward people wanting private queries,” Wang said. “We can imagine a model in which other services scrape a travel site, and maybe they volunteer to host the information for you, or maybe you subscribe to them. Or maybe in the future, travel sites realize that these services are becoming more popular and they volunteer the data. But right now, we’re trusting that third-party sites have adequate protections, and with Splinter we try to make that more of a guarantee.”

The paper’s authors concede that it might be a while before something like Splinter is implemented in real-world services. But the researchers say their function secret sharing technique greatly improves on previous experiments with hidden database queries, allowing searches to run up to ten times faster.

“There’s always this gap between something being proposed on paper and actually implementing it,” said Wang. “We do a lot of optimization to get it to work, and we have to do a lot of tricks to get it to support actual database queries.”

Author: Joshua Kopstein


Published in Online Research

In today’s digital age, social media competence is a critical communication tool for academics. Whether you’re looking to engage students, increase awareness of your research, or garner media coverage for your department, engaging in social media will give you a competitive edge.

Here is a case study that demonstrates this point. When Marianne Hatzopoulou, a civil engineering professor at the University of Toronto, needed to get the word out about her study on cyclists, she turned to Twitter. Hatzopoulou, who was researching the impact of air pollution on cyclists' behaviour, posted a few tweets encouraging people to fill out a survey. Twitter seemed daunting at first, especially since she had under 100 followers, but she tweeted nevertheless and encouraged her team to tweet as well.

Hatzopoulou’s Twitter activity caught the attention of a cycling magazine, which published a blog post about her study. A reporter with the local paper Metro Toronto saw the blog post and reached out to her for more information. Their one-hour phone conversation led to a front-page story the next day about the hazards of air pollution. That media coverage put her on the radar of a major network, Global TV, and a radio show with the national Canadian broadcaster CBC, both of which, with Earth Day approaching, were looking for environment-related stories. Hatzopoulou was inundated with media requests, but the publicity around her work was a researcher’s dream.

“It all started with a tweet,” said Hatzopoulou. “The reach we’ve had has been unbelievable.” The media activity, initiated by Twitter, has given Hatzopoulou and her team great momentum as they prepare to take the study to New York City and Montreal.

But how do you use social media effectively to gain a competitive advantage? Here are some guidelines to help you maximize your impact online.

Build a targeted profile. Who are you trying to talk to on social media? What do you want to tell them? Answering those two questions will help you identify your audience, content and tone. Generating targeted content will attract a targeted audience. Make sure your profile bio on social media platforms, such as LinkedIn and Twitter, spells out the value you provide. Let’s say you are a scientist looking to make science fun and accessible, like Imogen Coe, a cell biologist and Dean of Science at Ryerson University in Toronto. Including the phrase “helping make science fun and accessible” in her Twitter bio is a clear indicator of the content she intends to share. Keeping her message focused on issues she’s passionate about in science has helped her build a large network of scientists around the world – from Canada and the U.S. to the UK and New Zealand.

Engage your audience in meaningful conversations. Speaking up about issues of interest to you and your audience will help you position yourself as a thought leader in your space. That rings true for Coe, who took to Twitter last year to express her views on a story that created much buzz in the science community. She was responding to Science magazine’s career column, which had advised a postdoc researcher to look the other way after she complained that her male supervisor was looking down her shirt. Appalled by that advice, Coe e-mailed Science magazine, offering alternative advice on how to deal with harassment. She then tweeted a screenshot of her e-mail, which was quickly retweeted and supported by scientists around the world.

A reporter with the Washington Post saw the tweet and contacted Coe to get her thoughts on the story. The next day, Coe’s comments appeared in the Washington Post. The dean’s social engagement has amplified her message and helped her garner media attention as a respected source in her field. More importantly, her voice and that of others resulted in the original advice column being removed and replaced with crowdsourced advice, including Coe’s, that helps the person being harassed.

Make social engagement a habit. Incorporate social media into your daily routine so you can stay up to speed on what your stakeholders and peers are talking about. A five-minute check-in on Twitter every day is more effective than one hour every two weeks. Go online, respond to others and engage your audience in conversations that matter to them.

“Social media really doesn’t take that much time. I tend to use it mostly in the evenings before bedtime and in between meetings,” said Santa Ono, president of the University of Cincinnati. Ono, who has over 69,000 followers on Twitter alone, is one of the most social media-savvy administrators in academia. His trademark hashtag #HottestCollegeinAmerica, which he initiated to promote conversations around his university, has caught on and is regularly used on Twitter.

Think before you post. While social media engagement is undeniably an effective tool for attracting media attention and raising one’s profile, it can also backfire. For example, a few years ago Ono shared a picture of himself with a former university president who had been criticized for forcing the resignation of a basketball coach; the tweet quickly drew a backlash of negative comments. He removed the photo within five minutes of posting it. Ono’s rule of thumb? Before pushing ‘send,’ he asks himself what the tweet would look like on the front page of USA Today, he told the Chronicle.

Get acquainted with your school’s social media policy and make sure your posts comply with its guidelines. You do, after all, represent your employer on social media, regardless of any “views are my own” disclaimer in your bio. Coe, for example, received media coverage when she spoke up on Twitter, but her comments reflected typical university policy on harassment, including her own university’s. Tweeting about controversial issues in a way that ran against school policy, she admits, could have caused problems.

It doesn’t matter whether you have a small following at first. Becoming a smart user of social media can help you translate your research into impact.


Published in Online Research

