International news

Influential Work from PLOS Authors Garners Lasker Awards

Plos -

The Lasker Awards recognize the contributions of scientists, physicians and public servants who have made major advances in the understanding, diagnosis, treatment and prevention of human disease. Every year since 1945, the Albert and Mary Lasker Foundation has pursued its mission of recognizing research excellence, public education and advocacy.

As a champion of biomedical research, Mary Lasker worked to increase public appreciation for, and government funding of, the medical sciences. As a result of her advocacy efforts, several NIH Institutes were newly created, including the National Heart Institute, the National Institute of Mental Health and the (originally named) National Institute of Neurological Diseases and Blindness. Lasker helped change the biomedical research landscape in the United States, and the scientific community benefits from her dedication to this day.

PLOS is proud that six of this year’s seven Lasker awardees have published research or an interview with PLOS, and we are fortunate to benefit from the expertise of Charles M. Rice of The Rockefeller University in his role as an Academic Editor for PLOS Pathogens.

Here are the 2016 Lasker Award honorees with a summary of their PLOS research and interviews, spanning five journals and The PLOS Blog Network, for a collective total of 38 articles and two interviews.

Oxygen sensing—an essential process for survival:

Gregg L. Semenza’s three PLOS ONE articles cover the role of NADPH oxidase in Hypoxia Inducible Factor-1α (HIF-1α) activation, the dependency on tumor suppressor p53 for macrophage migration inhibitory factor’s effect on HIF-1 activation, and the ability of HIF-1α to regulate the expression of cell adhesion molecule CD44.

Peter J. Ratcliffe published two PLOS ONE articles and one each in PLOS Biology and PLOS Medicine. Some of this work examines the relationship between the von Hippel-Lindau (VHL) tumor suppressor gene, HIF-1 and extracellular matrix in C. elegans, the role of VHL-HIF pathway in human cardiopulmonary physiology and function at standard and high altitudes, and most recently the investigation of compounds that inhibit the hypoxia sensors of the HIF system, the HIF prolyl-hydroxylases, with implications for therapeutic treatment of stroke or other diseases of cerebral ischemia.

DOI: 10.1371/journal.pbio.0020289

William G. Kaelin’s PLOS Biology article – published in the journal’s inaugural year – explores the relationship between inactivation of the VHL gene, subsequent HIF2α activity and renal carcinoma tumor formation.

Hepatitis C replicon system and drug development:

Charles M. Rice has eight articles with PLOS; three in PLOS ONE and five with PLOS Pathogens. His virology research – while primarily focused on hepatitis C virus (HCV) – also addresses arthropod-transmitted viruses in the Flaviviridae family, such as yellow fever virus, and natural inhibitors of HIV identified from simulation screening of the pan-African Natural Product Library followed by cell-based testing. A subset of Rice’s HCV work published in PLOS Pathogens covers direct deregulation of the cell cycle in HCV infection as a contributor to liver disease, host cell protein and lipid mapping to uncover temporal and global changes as a result of HCV infection and a mutational structural analysis of the p7 protein revealing positions important for particle assembly and infectivity.

DOI: 10.1371/journal.ppat.1000719 DOI: 10.1371/journal.ppat.1005297

Ralf Bartenschlager tops the PLOS list with 27 articles; 20 in PLOS Pathogens and seven in PLOS ONE. Select key early work on HCV includes the role of cyclophilin A in HCV replication and polyprotein processing, the role of HCV p7 protein as a membrane pore involved in production and release of infectious virions and the dependence of HCV envelope glycoprotein secretion on assembly of triglyceride rich lipoproteins.

Bartenschlager’s team also determined the nonstructural protein 5A (NS5A), a component of the viral RNA replication machinery, as a key factor for the formation of infectious HCV particles through an assembly determinant domain and lipid droplets. Bartenschlager’s seminal microscopy work on the intracellular membranes of HCV infected cells is visually stunning and included in the PLOS Pathogens 10th Anniversary Collection.

More recent articles describe use of a yeast two-hybrid screening strategy to generate an interactome of cellular proteins that may function with influenza virus non-structural proteins NS1 and NS2, potentially informing therapeutic interventions, and work on Dengue virus that provides a genetic map of determinants involved in viral RNA replication and extends the list of functions ascribed to the enigmatic nonstructural protein 1.

DOI: 10.1371/journal.ppat.1005277 DOI: 10.1371/journal.ppat.1003056

Discoveries in DNA replication and leadership in science and education:

In 2012, the Jane Gitschier Interviews in PLOS Genetics turned to this year’s Lasker-Koshland Special Achievement Award in Medical Science awardee, Bruce Alberts, for his memories of how he got into science and his thoughts on learning from failure and getting committees to reach consensus. More recently, Alberts shared with PLOS his insights into issues facing scientists today, such as journal impact factors, new forms of recognition for contributions to the scientific publication process and the role of senior as well as junior researchers in changing the culture of science.

For those wanting more information on the significance of the work of this year’s winners and the award in general, The Lasker Foundation and Cell provide coverage. Cell has also curated Collections dedicated to Hypoxia-Inducible Factors and virus infections. Much, but not all, of the content is Open Access.

PLOS has previously profiled author recipients of the 2016 Breakthrough Prize in Life Sciences, so bookmark The Official PLOS Blog and visit this site as future scientific prizes are awarded.

 

Image credit: The Lasker Foundation

PLOS appoints Dr. Joerg Heber Editor-in-Chief of PLOS ONE

Plos -

PLOS announced today that after an extensive search, Dr. Joerg Heber has been appointed Editor-in-Chief of PLOS ONE. Heber will be responsible for setting the editorial course of the journal and continuing its mission of improving scholarly communication. His appointment is effective November 21, 2016.

“Joerg’s deep understanding of scholarly publishing and his passion for Open Access will be tremendous assets to me and our editorial staff, and most importantly to PLOS ONE’s 6,000 Academic Editors and our authors,” said Veronique Kiermer, Executive Editor of PLOS. “PLOS ONE has been a driver of changes in scientific communication since its launch ten years ago. It is an enormous responsibility and I am entirely confident in Joerg’s ability to lead the journal through its next phase, to further develop its mission and meet the needs of the scientific community.”

“I am delighted to be joining PLOS” said Heber. “PLOS’ commitment to Open Access and to innovation has been transformative, and PLOS ONE is ideally placed to support Open Access and open science with continued advancements in scholarly communication. I’m excited to work with the PLOS ONE team to serve science as a whole.”

Prior to joining PLOS, Heber was Executive Editor of Nature Communications, where he had responsibility for the journal’s overall editorial strategy. He was instrumental in Nature Communications’ transparent peer review initiative, implemented its Data Availability Statements and contributed to the journal’s move to full Open Access publishing. Heber also worked as a Senior Editor for Nature Materials, and his previous experience includes a visiting professorship at the University of Tokyo and a lectureship at Philipps-University Marburg, Germany.

Heber obtained his PhD in semiconductor physics at Imperial College London, UK, and did post-doctoral work at Bell Labs, New Jersey.

Riding A Wave Towards Improved Truth in Science Communication

Plos -

It is an exciting time in scientific publishing. Initiatives such as digital identifiers for authors through ORCID, more granular recognition of collaborative work with standardized language for specific roles through CRediT, and more competition in the Open Access publishing world benefit researchers and move the scientific endeavor toward a more transparent and accountable future.

Yet the write-up and publication of results is one of the most challenging aspects of the endeavor, with peer review and reproducibility at the heart of this stage of the research lifecycle. We have previously acknowledged on The Official PLOS Blog that the public

“relies on the belief that content published in peer-reviewed journals is trustworthy, despite the fact that this is too often not the case.”

We have also acknowledged that we must do better: all stakeholders, including publishers, are accountable. Although the overall concept of peer review is an accepted form of quality control and valued by the scientific community, in practice it suffers from imperfections that prevent it from achieving that one great thing: advancing research communication.

In a thoughtful consideration of Truth in Science Publishing: A Personal Perspective, Thomas Sudhof eloquently describes peer review and reproducibility as flawed checkpoints that impair the “validity of published scientific results” and impede trust in science.

As a recipient of both the Nobel Prize and the Lasker Award for his work on synaptic transmission, Sudhof brings perspective and integrity to his thought leadership. Highlighting hidden conflicts of interest, too little accountability for journals and reviewers, and lack of competition between journals as three problems with peer review that have “corrupted the process, decreasing its value,” Sudhof endorses more transparency in the peer review process to reduce bias.

At PLOS we see a range of ways to improve the process without diminishing the aspects that the community values. Current tools and systems that address these limitations include posting research to preprint servers before formal publication, enabling researchers to improve their work and share it earlier. There is an opportunity to improve review forms that may be cumbersome or insufficient for providing thoughtful and constructive feedback to authors. Appropriate and rigorous reviewer and editor training can help mitigate potential reviewer bias and mentor early career researchers. With improved technologies and processes, publishers have an opportunity to improve the efficiency, quality, trustworthiness and authenticity of the process.

As for reproducibility, Sudhof outlines increasingly complex experiments that are impossible to reproduce, “tweaked or selected” results that do not hold up with repetition, lack of validation of reagents and methods, and the “near impossibility” of publishing negative results as contributors to the problem.

Providing opportunity to showcase peer-reviewed articles that address the reproducibility issue is an important value of PLOS and PLOS ONE; the journal welcomes submission of negative, null and inconclusive results. PLOS Biology’s Meta-Research section welcomes experimental, observational, modeling and meta-analyses that address research design, methods, reporting, verification or evaluation.

PLOS Biology and PLOS Genetics authors can contribute to the reproducibility effort by identifying model organisms, antibodies or tools with a unique Research Resource Identifier (RRID). PLOS is a part of the Research Resource Identification Initiative, a cross-publisher effort to promote reproducibility in science and enable effective tracking of the use of particular research resources across the biomedical literature.

PLOS works toward a future where research is published without unnecessary delays, and continual assessment and commentary is provided by a robust and ethical system of visible, engaged pre- and post-publication peer review. We strive to engage a global editorial and reviewer contributor community, appropriately trained, recognized and incentivized. With regard to journal-facilitated peer review, rigorous input from experts in a relevant field of research is highly valued by both authors and readers, and contributes to trust of research results for working scientists, clinicians, patient advocates, policymakers and educators.

Addressing the issues and challenges that perversely incentivize unreliable research or prevent peer review from achieving its scholarly ideal will not be easy or quick. The challenges are substantial and the solutions must be as well, and satisfy a diverse researcher and stakeholder community. Broader adoption of reproducibility efforts and better recognition for the range of contributions made by researchers and reviewers will not be enough without the engagement of early career researchers, junior investigators and senior leadership with the power to influence change.

 

Image credit: one-vibe, pixabay.com

The simple magic of reuse, sharing and collaboration

GoOpen.no -

Two weeks ago I posted a blog post with a timeline of OER. After reading this, my friends in Addis Ababa, Ethiopia, picked up the timeline and translated it into Amharic. This involved a different language, a different platform and a different context. The common thread is H5P, a tool I have blogged about many times before, that allows anyone to create, share and reuse interactive HTML5 content in their browser.

 

The important thing to notice here is that the team in Addis could reuse all the effort I put into the timeline, and simply by translating it they made the timeline available in a new language, something that would be impossible for me to do because I don't know Amharic.

There is a growing edTech and OER community in Addis, and this last weekend they organized a workshop where they also made their own timeline describing important events in Ethiopian history (see it at the end of this blog post). As part of the same workshop they made an interactive test where you can test your skills on the most common Amharic words.

 

This gave me the idea that I could make a new resource based on what they have made, and in fact make an OER in Amharic, a language that I do not master. How? I made all the «cards» in the object below based on text from the team in Addis. Our common ground is that we all understand English.

 

When advocating for open educational resources, open source and open standards, the message is sometimes lost in the complexity of all the technical issues. I myself have on more than one occasion struggled to explain the «magic of OER». Working with a small use case like this seems like a great way to demonstrate the magic of open educational resources.

Check out this timeline on Ethiopian history:

What can the «anti OER lobby» learn from former Microsoft CEO Steve Ballmer?

GoOpen.no -

Occasionally I bump into representatives from the «anti OER lobby». They often start off by talking about how open educational resources ruin the market, and if the OER is financed with public money they go on about how the government is using its position to compete in the marketplace by handing out «free content».

The problem with this claim is of course that it belongs in another paradigm, a paradigm without what we now call «the internet». This is a global issue, but we can use Norway as an example. The idea that the Norwegian government, municipalities and counties should not be able to let teachers (on the public payroll) share content on the web under a free license is simply ridiculous.

Last week I met a guy from an organization that lobbies hard against OER, and while talking to him I came to think about Steve Ballmer, former CEO of Microsoft. It was a sort of déjà vu moment, and it took me back to 2001.

During an interview with the Chicago Sun-Times on June 1, 2001, Ballmer said that «Linux is a cancer that attaches itself in an intellectual property sense to everything it touches».

Fifteen years later Microsoft has shifted its stance completely and invests substantially in open source, and even Ballmer himself has been quoted as saying that the threat from Linux is over. Microsoft's current CEO Satya Nadella took it even further and went public two years ago saying that Microsoft loves Linux.

In the 15 years that have passed, Microsoft has lost its position in many markets and has been overtaken by Google and Android in the mobile market, while Linux dominates everything from the server market to devices running in cars or in the kitchen.

For anyone who has been part of both the open source movement and the OER movement, it's obvious that they share principles, philosophy and methodology.

So my simple question is: What can the «anti OER lobby» learn from former Microsoft CEO Steve Ballmer?

We value “Open” as a fundamental quality in education and in our learning resources.

GoOpen.no -

“Open” produces better outcomes than “Closed”. This gives us a new responsibility. We must now prioritize our time and resources accordingly. The time has come to value “Open” as a fundamental quality in education and in our learning resources. – Head of NDLA, Øivind Høines

The Norwegian Digital Learning Arena (Nasjonal digital læringsarena) is a joint enterprise operating on behalf of the county councils in Norway. Our goal is to develop and publish high quality, internet-based open educational resources (OER) in subjects taught at upper secondary school level and make these freely available.

The term “open” is a cornerstone in all our projects and an important part of our strategy as we develop new subjects and open educational resources. From the beginning in 2007, head of NDLA Øivind Høines and his team worked on how NDLA could build its platform, content and organization with “Open” as an important quality.

For NDLA as an organization this materializes in four focus areas:

  • Open standards
  • Open source
  • Open interfaces
  • Open methodology
    Open standards

    A major reason for us at NDLA to use open standards is that we would like our content to be reused and remixed by anyone. By using open standards we aim to make it easier for systems from different parties using different technologies to interoperate and communicate with our content and technology.

    Another important aspect of open standards is to avoid confinement to a single vendor or proprietary technology, and to provide better conditions for free competition between all technology vendors and content creators. Open standards set out to prevent unfortunate lock-in, monopolization and distorted competition.

    An important area of focus is the use of standardized protocols and specifications where relevant. This applies both between components internally in the NDLA solution and in NDLA’s communication with third-party services.

    A few examples of such standards and specifications:

    • HTML5: a mark-up language intended for the formatting of webpages with links and other information that can be viewed in a browser, and which is used to structure the information. HTML5 incorporates several new kinds of content (e.g. audio and video) compared with previous versions of the HTML standard.
    • CSS: Cascading Style Sheets is a mark-up language used to define the layout of files written in HTML or XML.
    • Tin Can: a standardized API for learning technology making it possible to gather data on user experiences.

    To a larger extent than today, NDLA will be built upon this notion of open standards and known specifications.
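Tin Can (also known as the Experience API, or xAPI) expresses learning activity as simple "actor – verb – object" statements. The sketch below shows the general shape of such a statement; the student name, mailbox and resource URL are invented for illustration, not taken from any real NDLA data.

```python
import json

# Sketch of a Tin Can (xAPI)-style statement: an actor performed a verb
# on an object. The actor details and object URL are illustrative only.
statement = {
    "actor": {"name": "Example Student", "mbox": "mailto:student@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.org/resources/interactive-timeline",
        "definition": {"name": {"en-US": "Interactive timeline"}},
    },
}

print(json.dumps(statement, indent=2))
```

A learning record store collects such statements, which is how a platform like NDLA could gather data on how its resources are actually used.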
    Open source

    Open source is an important part of all development at NDLA. We have based our platform on Drupal and contributed significantly to the development of H5P as a platform for easier creation, sharing and reuse of the developed content and applications.

    H5P is not a standard, but an implementation that supports HTML5. H5P is being used for the development of different kinds of interactivity in NDLA. H5P is an open source-based framework for the development of HTML5 based content (video, interactive presentations, multiple choice assignments, timelines, etc.). We are proud to say that more than 2400 websites all over the world now run H5P.

    Why open source?

    Open source software is software distributed with the assumption that the source code is made readily available for reuse. The opposite is software that keeps the source code secret/closed or protected through legislation. The main strategy of NDLA has always been geared towards open source, but in certain contexts it has proven difficult to avoid using third-party products or components that follow other licensing regimes. In the future, NDLA will go further and demand open source software in all vital parts of a solution.

    Open Interfaces

    We are interested in sharing our content in any way we can. In addition to developing our own website and services, we develop APIs (application programming interfaces), or open interfaces, to make it easier for any third party to reuse our content.

    By developing and using such open, well-documented APIs, NDLA will facilitate a modularity that makes the solution more service-based and flexible to change. Additionally, both the data and the modules become easier for third parties to reuse.

    What is an API?

    APIs (application programming interfaces) are the interfaces between different software components, linking the components together in standardized ways. An API describes what will happen in different circumstances, e.g. when finding or saving specific data in a database. An open API is an interface that is openly described: how it operates is public knowledge, so anyone can develop a solution that can link to and benefit from it.
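As a concrete illustration, consider a client talking to an open, documented search API. Everything here is hypothetical (the endpoint and parameter names are invented for the example); the point is that because the interface is openly described, any third party could write this function.

```python
from urllib.parse import urlencode

# Hypothetical open API endpoint, invented for illustration.
BASE_URL = "https://api.example.org/v1/resources"

def build_search_url(query: str, language: str = "en", page: int = 1) -> str:
    """Build a request URL from the API's documented query parameters."""
    params = urlencode({"query": query, "language": language, "page": page})
    return f"{BASE_URL}?{params}"

print(build_search_url("algebra", language="nb"))
# → https://api.example.org/v1/resources?query=algebra&language=nb&page=1
```

Because the parameters are documented, a consumer never needs to see the server's source code to integrate with it; the description of the interface is the contract.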

    Open methodology – crowdsourcing

    For us at NDLA, crowdsourcing is a methodology whereby individual teachers and pupils can create, co-create and develop content themselves. The concept of crowdsourcing makes it possible for a larger group of people, e.g. teachers, to revise an academic plan, a curriculum or the actual content of learning resources.

    Crowdsourcing is a work practice based on voluntary participation, where a large number of contributors execute a task based on a sense of community, participation and self-organization rather than managerial control. Numerous actors thus contribute to improving the quality of a specific product.

    The word “Open” has for us a pedagogical foundation. Learning as an activity thrives in an open landscape where information is truly liberated and free. We learn better when we can participate freely, when we openly share what we make, when we are allowed to remix the work of others, and when our own contributions become part of a wider and connected society. – Head of NDLA, Øivind Høines.

     

    OER Global Search – makes it easy for you to find open educational resources

    GoOpen.no -

    The last couple of weeks I have been working on a project that I have called OER Global Search. The idea behind OER Global Search is to make it easy for you to find educational resources that allow reuse, re-contextualization and translation.

    It can be very difficult for users to distinguish between what is called open educational resources and other services that simply provide content for free. Even some websites that use the term Open in their name do not always offer content under a free license. For individuals or projects that plan to change, remix or translate content, it is important to find true OER, not content that is merely free as in gratis.

    OER Global Search solves this by using what is called Google Custom Search, targeting 15 to 20 of the most widely used websites that are not only free, but actually offer content under a free license.
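Mechanically, a custom search engine restricts results to a whitelist of sites. The effect is similar to scoping an ordinary query with `site:` operators, as this small sketch shows; the three sites listed here are an illustrative subset, not the actual searchoer.com configuration.

```python
# Illustrative subset of whitelisted OER sites; not the actual
# OER Global Search configuration.
OER_SITES = ["ocw.mit.edu", "khanacademy.org", "ck12.org"]

def scoped_query(keyword: str) -> str:
    """Scope a keyword search to the whitelisted OER sites only."""
    sites = " OR ".join(f"site:{s}" for s in OER_SITES)
    return f"{keyword} ({sites})"

print(scoped_query("algebra"))
# → algebra (site:ocw.mit.edu OR site:khanacademy.org OR site:ck12.org)
```

Whether done with `site:` operators or a hosted Custom Search engine, the key design choice is the same: curate the site list by license, so every result is genuinely open, not just gratis.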

    The most well-known OERs are MIT OpenCourseWare, Khan Academy and CK-12.org. Here is a complete list of the services that are included in the search: http://searchoer.com/list-of-oer.html

    We are seeing a dramatic increase in open educational resources covering different subjects at all levels. At the launch of our service, a keyword such as “Algebra” returns 387,000 results. The technical development of the service is fairly simple, so the main focus will be to develop the search further by identifying good services in different languages.

    The main language on the web is English, but we have also included some resources in languages such as Hindi, Spanish, Norwegian, Portuguese and French.

    GoOpen Talk with Meredith Jacob

    GoOpen.no -

    In this GoOpen Talk I have a conversation with Meredith Jacob, Assistant Director at American University Washington College of Law. Meredith is part of the legal team at Creative Commons US and a leading expert on IP and copyright issues. In this video blog she talks about the OER situation in American schools and the GoOpen campaign launched by the U.S. Department of Education.

    GoOpen talk with Meredith Jacob from GoOpen.no on Vimeo.

    Author Credit: PLOS and CRediT Update

    Plos -

    Our January update on author credit focused on how PLOS was moving forward with the use of ORCID identifiers (iDs) for researcher identification. Starting with authors, that effort allows us to know and unambiguously credit who participated in the work being published and forms the base for plans to eventually provide credit to all participants in the research outputs ecosystem. Today’s update is about providing authors attribution for what they contributed. Specific and comprehensive attribution moves the needle for institutions’ and funders’ abilities to evaluate researchers based on the roles they play in published works, rather than on the journals in which their articles appear or their placement within the byline.

    Collaborative Development

    PLOS has for many years required that authors state what contributions they made to their work, as have many other publishers. The author contributions statements published in articles provide transparency in credit and accountability for all authors. What’s new is that there is now a community-developed open-standard taxonomy of contributions intended to replace over time the many disparate lists currently in use.

    PLOS participated along with many other publishers and stakeholders (including funders, researchers and university administrators) in the development of this taxonomy, under the auspices of CASRAI (Consortia Advancing Standards in Research Administration Information) and with the participation of NISO (National Information Standards Organization). Articles in Learned Publishing and Nature – and related documents on the CASRAI site – provide background about the work that led to this open standard.

    Author Benefit

    For a given published work, the CRediT taxonomy makes transparent who participated and the roles they played. It remains simple by design but offers more granularity than previous lists used by PLOS and other publishers. More finely-grained information will help make the ordering of authors less important and will facilitate a shift in focus for tenure and promotion committees – and other evaluators – away from how many times an individual is a first- or last-named author and toward their specific contributions to the scholarly record.

    Importantly, the CRediT taxonomy is not meant to determine who qualifies as an author. Each author on a paper may have one or more CRediT contribution roles, yet having a role described by the taxonomy does not automatically qualify someone as an author. Authorship is determined by following PLOS guidelines, which are based on the ICMJE (International Committee of Medical Journal Editors) requirements.

    As PLOS continues to implement its new submission system, Aperta™, we will make author contributions machine readable, with each individual’s contributions coded into the article’s XML. This is already in place for PLOS Biology, the first journal to launch in Aperta. For every article (identified by a Crossref DOI) and every author (all to be identified – eventually – by an ORCID iD), there will be one or more associated contributions (identified by CRediT). It is the confluence of these persistent identifier systems that will underlie future applications to increase transparency and allow discovery of individual contributions.
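As a rough sketch of what machine-readable contributions could look like, the snippet below generates a contributor element that ties an ORCID iD to one or more CRediT roles. The element and attribute names are invented for illustration; they are not PLOS's actual article XML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup, for illustration only: associate an author's
# ORCID iD with one or more CRediT contribution roles.
def contrib_element(orcid: str, roles: list) -> str:
    contrib = ET.Element("contrib", {"orcid": orcid})
    for role in roles:
        ET.SubElement(contrib, "role", {"vocab": "credit"}).text = role
    return ET.tostring(contrib, encoding="unicode")

print(contrib_element("0000-0002-1825-0097",
                      ["Conceptualization", "Writing - original draft"]))
```

The confluence the post describes falls out naturally: the DOI identifies the article, the ORCID iD identifies the person, and the role elements identify the contributions, so downstream tools can query any of the three.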

    Future Functionality

    Eventually, the coding of individual contributions in article metadata will allow contributions to be surfaced in CVs and researcher profiles. In the short term, it will improve the display of contributions within PLOS articles, currently presented in paragraph form within the article—whether PDF or HTML. The mock-up images here illustrate the approach PLOS is exploring for presentation in the author tab and for roll-over display in the author byline.

    Process and Policy

    The corresponding (or submitting) author will be required to provide the relevant contributions for their co-authors, just as they do now, when submitting a manuscript (see our Authorship Guidelines). We strongly encourage each group of researchers to think about, discuss and decide on their various contributions during the course of manuscript preparation. The task of assigning contributions to individuals should be collegial, and the corresponding author should ensure that contributions are agreed on amongst authors before submission, in the same way that the ordering of authors should be agreed on before submission. The CRediT taxonomy offers a framework for discussion to reach this agreement.

    It’s worth repeating—before submission, decide and get agreement on:

    • Who will be included in the author list
    • What contributions each author has made
    • In what order the authors will appear

    And if there are contributors whose input does not rise to the level of authorship, ensure that proper acknowledgements are included. Every person named – authors and those acknowledged – must be aware of and agree to their inclusion. When preparing your next manuscript, take some time to discuss author contributions using CRediT as a common language. Don’t have an ORCID iD yet? Get one here and log in to the PLOS manuscript submission system with it—when your next article is published with PLOS we’ll automatically update your ORCID record. In the future, that update will also include your contributions.
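An ORCID iD is not just an arbitrary string: per ORCID's documentation, its final character is a checksum computed with the ISO 7064 MOD 11-2 algorithm, so malformed iDs can be caught automatically at submission time. A small validation sketch:

```python
def orcid_check_char(orcid: str) -> str:
    """Compute the expected final (checksum) character of an ORCID iD
    using the ISO 7064 MOD 11-2 algorithm described by ORCID."""
    digits = orcid.replace("-", "")[:-1]  # all digits except the check character
    total = 0
    for d in digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check that an iD's last character matches its computed checksum."""
    return orcid.replace("-", "")[-1] == orcid_check_char(orcid)

print(is_valid_orcid("0000-0002-1825-0097"))  # → True
```

The iD used above is ORCID's own well-known example identifier; a single-digit typo anywhere in it would make the check fail.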

     

     

    Image Credit: Fabricio Rosa Marques

    Measuring Up: Impact Factors Do Not Reflect Article Citation Rates

    Plos -

    This special blog post is co-authored by PLOS Executive Editor Véronique Kiermer, Université de Montréal Associate Professor of Information Science Vincent Larivière and PLOS Advocacy Director Catriona MacCallum. It accompanies the posting on BioRxiv of a research paper on citation distributions.

    Journal-level metrics, the Journal Impact Factor (JIF) being chief among them, do not appropriately reflect the impact or influence of individual articles—a truism perennially repeated by bibliometricians, journal editors and research administrators alike. Yet, many researchers and research assessment panels continue to rely on this erroneous proxy of research – and researcher – quality to inform funding, hiring and promotion decisions.

    In strong support for the shedding of this misguided habit, seven journal representatives and two independent researchers – including the three authors of this post – came together to add voice to the rising opposition to journal-level metrics as a measure of an individual’s scientific worth. The result is a collaborative article from Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science, posted on BioRxiv this week. Using a diverse selection of our own journals, we provide data illustrating why no article can be judged on the basis of the Impact Factor of the journal in which it is published.

    The article presents frequency plots – citation distributions – of 11 journals (including PLOS Biology, PLOS Genetics and PLOS ONE) that range in Impact Factor from less than three to more than 30 (the analysis covers the same period as the 2015 Impact Factor calculation). Despite the differences in Impact Factors, the similarities between distributions are striking: all are highly skewed, with a majority of articles receiving fewer citations than the JIF indicates, and all span several orders of magnitude. The most important observation, however, is the substantial overlap between the journal distributions. Essentially, two articles published in journals with widely divergent Impact Factors may very well have the same number of citations.
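    The gap between a journal-level average and the typical article can be sketched with a quick simulation. This is a hypothetical illustration, not the paper’s data: citation counts are drawn from a skewed, log-normal-like distribution, and a JIF-style arithmetic mean is compared with the median article.

    ```python
    import random
    import statistics

    # Hypothetical illustration (not the paper's data): draw citation counts
    # from a skewed, log-normal-like distribution, then compare the JIF-style
    # arithmetic mean with the median article.
    random.seed(42)
    citations = [int(random.lognormvariate(1.5, 1.0)) for _ in range(10_000)]

    jif_style_mean = statistics.mean(citations)   # what a JIF-like average reports
    median = statistics.median(citations)         # the "typical" article
    below_mean = sum(c < jif_style_mean for c in citations) / len(citations)

    # In a distribution with a long right tail, the mean sits well above the
    # median, so most articles are cited less often than the journal average.
    print(f"mean={jif_style_mean:.1f}  median={median}  "
          f"share below mean={below_mean:.0%}")
    ```

    With parameters like these, roughly two-thirds of the simulated articles fall below the journal-level mean, which is exactly the mismatch the citation distributions make visible.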

    Share and share alike

    By publishing this data, we hope to strengthen a call for action originally voiced by Stephen Curry, one of the authors, and to encourage other journals to follow suit. In the spirit of this call we present below the plots for all seven PLOS journals [see Fig. 1]. Needless to say, there are no surprises. Despite widely different publication volumes, all distributions are concentrated at low citation counts, with a long tail extending to highly cited articles—a pattern obscured by use of the JIF.

     

    Fig. 1: Citation Distributions of the PLOS Journals. Citations are to ‘citable documents’ (as classified by Thomson Reuters), which include standard research articles and reviews; distributions contain citations accumulated in 2015 to citable documents published in 2013 and 2014. Data were extracted using the “Purchased Database Method” detailed in the V. Larivière et al. BioRxiv article. To facilitate direct comparison, distributions are plotted with the same range of citations (0-100) in each plot; articles with more than 100 citations are shown as a single bar at the right of each plot. Copyright held by Thomson Reuters prohibits publication of the raw data, but the aggregated data behind the graphs are available on Figshare.
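    The binning described in the caption — one bar per citation count from 0 to 100, plus a single overflow bar for anything above 100 — can be sketched in a few lines. The sample counts below are invented for illustration, and `bin_with_overflow` is a hypothetical helper, not part of the published analysis.

    ```python
    # Sketch of the Fig. 1 binning: one bin per citation count from 0 up to
    # the cutoff (100), plus a single overflow bin for anything above it.
    # The sample citation counts are invented for illustration only.

    def bin_with_overflow(citations, cutoff=100):
        """Count articles per citation value; clamp values > cutoff into one bin."""
        counts = [0] * (cutoff + 2)          # indices 0..cutoff, last = overflow
        for c in citations:
            counts[min(c, cutoff + 1)] += 1  # values above cutoff share the last bin
        return counts

    counts = bin_with_overflow([0, 0, 2, 5, 150, 400])
    print(counts[0], counts[101])  # prints "2 2": two uncited articles, two in the overflow bar
    ```

    Plotting `counts` as a bar chart reproduces the layout of each panel: the bulk of the mass near zero, and the clamped overflow bar at the far right.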

     

    We do not deny there are differences among journals, which reflect the different article types, editorial criteria, scope and volume of each publication. These effects are notable, for instance, when considering PLOS ONE—where articles are selected on the basis of being technically sound and robustly reported rather than on perceived impact or general interest. Such criteria enable the publication of small studies or those with negative, null or inconclusive results, which might not garner many citations but are crucial in mitigating publication bias. The journal’s scope comprises disciplines with different citation habits and niche areas of research as well as social sciences, where citation rates are typically lower. Since the volume of publication is not artificially limited, these factors together explain the relatively higher number of articles with few or zero citations (a similar explanation can account for the distribution of citations in Scientific Reports). This does not mean that PLOS ONE (or Scientific Reports) does not publish many highly cited articles. On the contrary, a 2013 study indicated that PLOS ONE publishes its fair share of the top cited papers in the literature (relative to the number of papers it publishes). In the 2015 distributions, the volume of papers making up the high-citation tail of the PLOS ONE distribution is again substantial.

    The sway of influence

    What motivates our initiative to raise awareness is that despite calls to the contrary, the JIF remains a prevalent tool in evaluating scientists. Often it comes down to convenience, lack of time and appropriate alternatives, but it is also a question of culture. The misuse of the Impact Factor has become institutionalized in the research assessment methods of many universities and national evaluation panels, leading to a perverse incentive system.

    For researchers, the career advancement and reputational reward of ‘aiming high’ when choosing a journal is too great to ignore, even when the consequences are to work one’s way down the Impact Factor ladder one step at a time, rejection after rejection. This sequential submission pattern not only puts an enormous burden on journal editors and reviewers, it also causes unnecessary and unacceptable delays in making results available to the wider scientific community and the public. Worse are the stories of researchers who feel compelled to alter their experimental or analytical approach to make the manuscript more attractive to one journal or another. The profound consequences are manifest in other ways—a strong disincentive to pursue risky and lengthy research programs, to publish negative results or to pursue multidisciplinary research. They also provide a potent motive to flood fields that are already over-crowded and entrench a hypercompetitive system that increasingly disadvantages graduate students and early career researchers.

    Action in process

    There is no escaping the fact that a paper can only be properly evaluated by reading it. However, there are tools to help filter the scientific literature for the reach and impact an article might have, and not just within the scholarly research community. Several platforms offer article-level metrics, including PLOS’ own ALM service, which provides citations and other indicators of readership and social attention. Lagotto, the open source software powering PLOS ALMs, underpins Crossref’s Event Tracker, which captures a range of usage activity linked to any digital object identifier, including datasets.

    No single metric, however, can accurately reflect the diverse impact of different research outputs (as clearly laid out in the Metric Tide report and the Leiden Manifesto for research metrics). Ultimately, the scientific community needs a better means of capturing and communicating the assessment of validity, reliability, significance and quality that takes place over time, when experts engage deeply with and build upon the results of their peers.

    We also need more granular and robust ways of describing and assigning credit for the myriad contributions of individual researchers to articles, data, software, research projects, peer review and student mentoring. Towards this aim, PLOS and other publishers are starting to require that authors register for an ORCID iD, and are introducing the CRediT taxonomy to recognize the individual contributions of authors to an article. In the EU, Science Europe has just issued a report on how to evaluate multidisciplinary research that includes a recommendation for funders to evaluate applicants on a range of outputs, rather than just on publication record.

    These are all welcome steps but ultimately, the culture will only change when the institutions responsible for overseeing the assessment of researchers and those who constitute the evaluation panels take active steps to change how they assess scientists.

    Meanwhile, the message from journal editors and publishers that show their citation distributions is clear: we select and publish diverse articles that attract a wide range of citations, and no article can be adequately judged by the single value of the Impact Factor of the journal in which it is published.

     

    Image Credit: Gerd Altmann, Pixabay.com

     

    Open Innovation and the Creation of Commons

    Creativecommons.org -

    In March we hosted the second Institute for Open Leadership, and in our summary of the event we mentioned that the Institute fellows would be taking turns to write about their open policy projects. Below is a guest post by IOL Fellow Katja Mayer, a postdoctoral researcher in Science, Technology and Society at the University of Vienna.

    IOL2 at work by Cable Green, CC BY 2.0

    As a sociologist of science, I am interested in how scientific research, technological innovation, and society are linked together. I was always fascinated by the open source movement, and this fascination grew into strong advocacy when I started to use free and open source software myself to collaborate with fellow scientists. When I first heard about the open science movement several years ago, I was immediately convinced that it not only makes an interesting object of research (I’m currently working on open research data practices), but also that I would like to help spread open science practices to my communities. In addition to integrating open science related topics and methods into my teaching, I joined collective efforts to push for open science in national and European science and research policies. I am an active member of the open science workgroup of Open Knowledge Austria, and joined the Open Access Network Austria, where I serve on the steering group for implementing a national open access strategy.

    Being able to join the Institute for Open Leadership in March 2016 boosted both my professional development and my confidence in working on a transition to open science. The wonderful feedback I got from this group of inspirational individuals from all over the world still resonates, and continually helps me to shape my vision for the project that I’ll start in September: Exploring best practice examples of Open Innovation and the creation of Commons.

    IOL2 Fellows by bella_velo, CC BY 2.0

    The idea for this project was developed in Cape Town as a reaction to European policy rhetoric at the time appropriating terms such as “open science” and “open innovation”. We heard pronouncements like, “Europe is not productive enough. In Europe we are not succeeding in transforming research into innovation. Our knowledge is commercialized elsewhere.” These and similar descriptions of Europe’s problematic standing with regard to innovation form the main narratives in policy strategy documents that suggest the solution lies in the “open”. In other words, open innovation and open science should help to create jobs, spur economic growth, and make Europe competitive in terms of the commercialization of knowledge production. What was so alarming in this rhetorical policy move was its monopolization of the term “open” and its one-sided description of knowledge circulation and sharing. It is one-sided because its focus rests on a specific economic theory of open innovation, rather than on the diverse and longstanding types of openness already practiced by countless people around the globe.

    It’s a worthwhile idea that we should enable broad access to knowledge by fostering a stronger culture of entrepreneurship that can lead to the development of new products and services. But this approach lacks an understanding of the potential interplay of traditional and alternative markets, and new and unusual forms of value creation beyond the typical exploitation of intellectual property rights. Also, this framework for “openness” remains vague in its description of the relationship between science and business, and in how collaboration could result in forms of value capture that benefit all relevant stakeholders, especially those who funded the research. Open licenses and open policies are only rarely mentioned. When they are, it’s only in the context of best practices of others such as the Gates Foundation (see, e.g., Moedas 2016).

    Cape Town present for IOL2 Fellows by tvol, CC BY 2.0

    The objective of my open policy project is to crowdsource a collection of best practices in the creation of common goods and shared resources—beyond the one-sided economic vision currently used to describe open innovation. I wish to investigate how such projects and models have created new markets and new opportunities. By the end of September 2016 I will launch a website with a form to input basic information and media of open projects that would widen our understanding of what is possible in support of open innovation. Besides a database where such best practices are stored, I hope to create an interactive diagram with the help of other IOL participants. The diagram will depict selected open projects in relation to each other and across core characteristics of open innovation and the open movement. This way, politicians, administrators, and scientists can have a good sense of the existing open innovation ecosystem today. If you are interested in collaborating, please send me a short email at commons.innovation@gmail.com.

    “New knowledge is created through global collaborations involving thousands of people from across the world and from all walks of life.” – Commissioner Carlos Moedas, May 2015

    Envisioning an interactive diagram as a tool for understanding the potential of the open innovation movement stems from my wish to make that movement more coherently visible in teaching. To counter a uniform narrative of open innovation, it’s important to show the manifold dimensions of the open movement. Furthermore, I am particularly interested in the multiplicity of dimensions of openness, including which forms of openness are realized depending on the kind and scope of resources, projects, or works.

    In his 2003 book Open Innovation: The New Imperative for Creating and Profiting from Technology, Henry Chesbrough defined “open innovation” as innovation transcending the boundaries of the organization conducting it, and hence as a motor of productivity and growth. His notion of openness argues against the characterization of innovation as a linear process. Instead, open innovation introduces new forms of cross-sector and cross-organizational collaboration in knowledge production and design processes. (Note: We still see a linear innovation model today because of current measurement methods and statistical indicators. See Godin, 2006 for more).

    Today, a broader conceptual framework for open innovation is embedded in an integrated approach to openness. It is a vital element of the open movement and should not be taken out of this context.

    Graphic by Katja Mayer, CC BY 4.0

    Open innovation transcends the boundaries of traditional knowledge production and fosters cross-fertilization of knowledge. It can serve both as a trigger for change towards openness and as a cross-connector of multiple segments of the open movement.

    In an ideal interpretation of open innovation, we would follow the Open Definition, which means that anyone can freely access, use, modify, and share the content for any purpose (while preserving provenance and openness). But in practice, openness—in its many shades—cannot be reduced to a singular definition. However, we can emphasize its main characteristics:

    The open movement rests on common principles such as sharing and collaboration, transparency and participation, quality improvement and enhancement of positive societal impact by co-created shared values. Its core focus is on the actors and communities of openness, their skills and their mind-sets, and their abilities to openly innovate. Without an open ecosystem comprising important elements such as open policies and open licenses, open education, open source, open standards, and open science, open innovation would not be possible. Although it can create and shape markets, fostering the diversity of open business models, open innovation offers more than just economic impact: it has the potential for structural change in open societies (which goes far beyond the idea of rapid adoption of new technologies).

    Similarly, the open science movement is based on the idea that scientific knowledge of all kinds should be openly shared as early as is practical in the research process. The future of scholarly communication – as envisioned by the Vienna Principles – is based on open access to scientific publications and research data. Even more radically, it calls for the participation of all relevant stakeholders in research design and evaluation. Open scientific methodology enables new forms of participation and interaction in order to build and maintain sustainable ecosystems for co-creation. In an innovation context, emphasis should not only be put on the traditional commercialization of research outcomes. Open innovation in science should enable new public spheres, the creation of common goods, and other benefits enabled by an information commons—as explored by Ostrom in her 1990 book Governing the Commons: The Evolution of Institutions for Collective Action.

    Open science and the knowledge commons are already strongly influencing innovation in society through initiatives such as the Human Genome Project. Collective efforts to study the Zika virus or the US presidential call for an open cancer research initiative will foster new forms of open knowledge production and dissemination, as will any science policy with a strong mandate for open access and open research data. I think it will be of utmost importance to make the case for multiple knowledge markets—where open knowledge practices and commercialization can work in tandem for the benefit of rights holders and the broader public. Therefore, policy urgently needs to address open licensing models. Open innovation should strive to achieve the synergy of commercial and alternative markets, and support new, participatory forms of knowledge production and dissemination. By collecting past and present best practices (and also failures) from the open movement, I hope we can come to a better understanding about open innovation in service of a collaborative and productive commons in the future.

    Please join us in our effort to make the multiplicity of open innovation and open science more visible by collecting information on best practices. If you are interested, please send a short note to commons.innovation@gmail.com and you will receive updates about the project kickoff in September.

    Katja Mayer
    http://homepage.univie.ac.at/katja.mayer 
    Twitter: @katja_mat

    Further reading:

    Fecher, B., & Friesike, S. (2014). Open science: one term, five schools of thought. In Opening science (pp. 17-47). Springer International Publishing. http://book.openingscience.org/basics_background/open_science_one_term_five_schools_of_thought.html

    Chesbrough, H. (2003). Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business School Press.

    Godin, B. (2006). The Linear model of innovation the historical construction of an analytical framework. Science, Technology & Human Values, 31(6), 639-667.

    Mayer, K. (2015). Open Science Policy Briefing. ERA Austria  http://era.gv.at/object/document/2279

    Mayer, K. (2015). From Science 2.0 to Open Science: Turning rhetoric into action? STCSN-eLetter, 3(1). http://stcsn.ieee.net/e-letter/stcsn-e-letter-vol-3-no-1/from-science-2-0-to-open-science

    Nielsen, M. (2011). Doing science in the open. http://michaelnielsen.org

    Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge University Press.

    Have a look at the diagram by the P2P foundation: https://wiki.p2pfoundation.net/Everything_Open_and_Free_Mindmap

    The post Open Innovation and the Creation of Commons appeared first on Creative Commons blog.

    Spotlight on Gage Skidmore, political photographer

    Creativecommons.org -

    Gage Skidmore is a photographer and freelance graphic designer living in Phoenix, Arizona whose high-quality photos of politicians and pop culture have been featured in diverse publications including The Atlantic, MSNBC, Fox News, and The World. The ubiquity of Skidmore’s photos is a testament to his extraordinary success through open licensing.

    The 22-year-old started taking photos in 2009 during Rand Paul’s Senate campaign, uploading all of his photos under a CC BY-SA license. Since then, he has accumulated over 1 million photo credits and 1.2 million views on his page. In addition to political photography, Skidmore has been the official photographer for a variety of events and publications, uploading over 45,000 photos to his Flickr account.

    Skidmore answered questions over email from CC’s Eric Steuer, discussing his success as a photographer, passion for politics, and how the CC license fuels his work.

    What was the first photo you made of a politician? What were the circumstances surrounding that shot?

    The first ever political event I attended was an event in Louisville, Kentucky in November 2009, when I attended a healthcare town hall being hosted by the U.S. Senate campaign of then-ophthalmologist Rand Paul. I was a big supporter of his dad, Ron Paul, in his 2008 campaign, and at the time I lived in Indiana, so I was only a couple hours from Kentucky. Over the course of that year I decided to start documenting his campaign, mostly as a supporter, and attended a couple events a month. I uploaded all of these photos onto Flickr under a Creative Commons license for people to use.

    Rand Paul at Volunteer Phone Bank, Manchester, NH, Photo by Gage Skidmore CC-BY-SA 2.0

    How many political photos have you published since then? What is your typical process for getting these shots?

    I’m not entirely sure on the exact amount. The two main things that I cover are politics and pop culture conventions like Comic Con. I’ve uploaded close to 45,000 photos, and most of them are probably politics related.

    When did you decide to start using CC licenses to make your photos available to the world? And why did you make this decision?

    I saw Creative Commons as a vehicle to help get my photos disseminated easily very early on. Through my involvement with projects like the Wikimedia Commons, I learned about Creative Commons licensing, and chose the license that I thought best fit my desire for my photos to be used in the proper manner. Attribution was very important to me, and still is.

    Hillary Clinton with supporters, Photo by Gage Skidmore CC-BY-SA 2.0

    Since then, your photos have been used in a variety of ways. Do you notice that they’re mostly used by media outlets? What other ways have you noticed people using your work?

    My photos have been used by a lot of different websites, news sites, and sites like Wikipedia, and I’m very happy to see this. I really enjoy seeing my photos being used, especially if they comply with the CC-BY-SA license and attribute me.

    Do people typically contact you to let you know they’ve used your work? Have there been any particularly interesting conversations (or stories or even commissions?) that have come out making your work available to the world?

    I’ve had people email me just to make sure that I am attributed properly, or to ask permission to use my photos. I was involved with documenting the 2016 campaign, so I did have interactions with some of the campaigns who wanted to use my photos while also abiding by the photo license.
    One misconception that a lot of people have asked me about is in regards to the main photo on Donald Trump’s website. It is one of my photos, and his campaign actually attributed me at the bottom of his website. Many people assumed from this that I was a supporter of his, or worked for him in some way, neither of which is true. The Trump campaign simply found my photo, used it on their website, and attributed me for my work.

    Donald Trump Campaign Website Banner, Photo by Gage Skidmore CC-BY-SA 2.0

    At CC, we’re specifically interested in how creators contribute to a culture of sharing and gratitude by making their work available under CC. What’s been your experience as someone who puts a lot of high value work out there under CC licenses? Do you find that people are grateful for your contributions?

    I’ve had a great amount of positive reception from people thanking me for providing quality images of certain people over the years under a Creative Commons license. Wikimedia Commons is one such community that I believe truly embraces its contributors and tries to create a library of images that are Creative Commons or public domain. I’m very much glad to be a participant in this project.

    Has the approach you employ helped create any opportunities that might not have been available to you otherwise?

    Since I started I’ve had people recognize my name and actually get in contact with me to offer photography gigs, mostly in the Phoenix area where I live now. Getting my name out there helped people get a sense of my work, and that has translated into a lot of paid opportunities to be an official photographer for various events. Some of these include the Arizona Chamber of Commerce, Western Journalism, Conservative Review, Reason Magazine, the Mises Institute, Campaign for Liberty, the Iowa GOP, several different centers at Arizona State University, and some freelance work that has allowed me to photograph people like the President of the United States.

    I’m always excited to see what presents itself day by day, and it really all goes back to my involvement with Creative Commons that first allowed me to get my name out there and break into a field that is constantly changing and evolving.

    Bernie and Jane Sanders, Photo by Gage Skidmore CC-BY-SA 2.0

    The post Spotlight on Gage Skidmore, political photographer appeared first on Creative Commons blog.

    New Chilean law would make it harder for authors to freely share audiovisual works

    Creativecommons.org -

    2° Feria Tecnológica Audiovisual DuocUC by il_tommy, CC BY-NC-ND 2.0

    In May we learned that Chile’s Chamber of Deputies approved an amendment to a bill that would create a new, unwaivable right of remuneration for authors of audiovisual works. The law would apply to all audiovisual works, even those published under open licenses. This would mean that audio and video creators are supposed to be compensated even if they do not wish to receive royalties. Creative Commons and CC Chile are concerned that the bill could create unnecessary complexity for authors who want to share their works under CC licenses.

    Of course authors should be able to be paid for their work. But with over 1 billion CC licensed works on the web, we also know that many authors simply want to share their creativity freely under open terms to benefit the public. For example, educators and scholarly researchers create and share works primarily to advance education and to contribute to their field of study—not necessarily for financial remuneration.

    All CC licensors permit their works to be used for at least non-commercial purposes. When an author applies a Creative Commons license to her work, she grants to the public a worldwide, royalty-free license to use the work under certain terms. The license text specifically states, “To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.”

    Creative Commons and CC Chile sent a letter [English] [Spanish] to the Senate Education and Culture Committee stating our opposition to the legislation. We respectfully requested that the Senate vote against the bill, or offer an amendment so that authors may continue to share audiovisual works under Creative Commons licenses without imposing an additional burden such as having to agree to an unwaivable right for remuneration.

    The bill is moving through the senate committee, and you can take action now to tell Chilean lawmakers to keep open licensing options for video creators. 

    The post New Chilean law would make it harder for authors to freely share audiovisual works appeared first on Creative Commons blog.

    Redefining Open: MOOCs and Online Courseware in the Age of Creative Commons and Wikipedia

    Creativecommons.org -

    The guest post below was written by Peter B. Kaufman of Intelligent Television.

    When the Open Courseware movement first started – its Big Bang probably took place in mid-June 2001, when Mellon Foundation president William G. Bowen, Hewlett Foundation president Paul Brest, and MIT president Charles M. Vest announced the initiative at MIT – our understanding of rights and licensing and the full range of our opportunities for accessing and sharing knowledge was more primitive than it is today. We didn’t yet know truly how to share knowledge online, nor did we know how to permit, license, and further facilitate the use, reuse, and remix of our content. It would be two years before the licenses of Creative Commons, an organization also founded in 2001, would grace a million works. And it would be five years before Wikipedia, also founded in 2001, would publish its millionth English-language article.

    Today, almost 15 years later, a new order of magnitude is required to calculate the extent of the commons. Wikipedia and its sister projects have seen more than 2.6 billion edits to date; now the online, open encyclopedia gains over 10 edits per second – 20,000 articles per month worldwide – and English Wikipedia alone averages 800 new articles posted per day. Creative Commons has more than a billion licenses in circulation. CC-licensed works were, according to CC, viewed online 136 billion times last year alone, and the growth in the use of this content worldwide, while still challenging to track, appears to be commensurate.

    So how is it that today’s edition of open courseware – massive open online courses – doesn’t really intersect with the commons? Today there are thousands of hours of academy-produced video online – together representing the investment of tens of millions of dollars by universities and other cultural and educational institutions in online educational media. And, since 2001, major philanthropic foundations – Ford, Gates, Hewlett – and U.S. federal government agencies have accelerated open licensing mandates for their grantees. Yet most of the open courses and open courseware projects that universities have produced to date, and most of the ones that they are producing today, are far from truly open: far from being able to be welcomed by the keepers of the commons into the legally shareable universe, far from being licensed in ways that make them free. Open Courseware launched at MIT, where Richard Stallman, the visionary of free software and oft-cited inspiration behind Wikipedia and CC, keeps his office, yet most MOOCs, like most university video, lie outside the commons, and are destined to stay outside unless we do something.

    The “Redefining Open” Project, part of a larger advocacy initiative on opening educational video that Intelligent Television is leading with core support from the William and Flora Hewlett Foundation, explores why MOOCs are not as open as the “open” in their name might suggest, and puts forth suggestions about what might be done to help. Over the next three months the project will review the licensing frameworks for open courseware to date; analyze the rights anatomy of educational video; describe the state of educational media production and distribution in 2016; and address how production, distribution, archiving, and preservation processes might be changed to achieve greater openness and greater return on investment for many of the institutions funding MOOC development today. In October 2016 the project will present a series of next steps for MOOC producers to realize the promise that the founders of Open Courseware first envisioned 15 years ago.

    About the author

    Peter B. Kaufman is founder and executive producer of Intelligent Television in New York and former associate director of the Columbia University Center for Teaching and Learning. He served as conference co-chair of LEARNING WITH MOOCS II and is the author of, among other works, “Video on Wikipedia and the Open Web: A Guide for Cultural and Educational Institutions” for the Ford Foundation, The New Enlightenment: The Promise of Film and Video in the Digital Age, and, also with the support of the Hewlett Foundation, The Manual of Video Style.

    The post Redefining Open: MOOCs and Online Courseware in the Age of Creative Commons and Wikipedia appeared first on Creative Commons blog.

    Tell the European Commission to #FixCopyright

    Creativecommons.org -

    This post was remixed from the blog of the Communia Association, whose content is dedicated to the public domain.

    Through the Communia Association, Creative Commons and several CC Europe affiliates have responded to the copyright reform consultations of the European Commission. Currently, the Commission is asking for feedback on the “role of publishers in the copyright value chain” and on “freedom of panorama”. The window for providing responses ends on June 15. Communia has already submitted its detailed response. We think the Commission should stop the harmful link tax and support commonsense sharing of publicly viewable cultural works.

    It’s important that the Commission hears from you! Be sure to submit your responses to the survey by 15 June. There is a guide to assist you in answering the questions at http://youcan.fixcopyright.eu/.

    Ancillary copyright = Link tax

    The Commission is considering introducing a new right which would permit content publishers to extract fees from search engines for incorporating short snippets of—or even linking to—news articles. This is why the measure is called a “link tax.”  

    Adopting new rights for publishers above and beyond the extensive rights they already enjoy under copyright law would be dangerous and counterproductive. Spain and Germany have already experimented with similar versions of the link tax, and neither resulted in increased revenues for publishers. Instead, it likely decreased the visibility (and by extension, revenues) of their content—exactly the opposite of what was intended.

    Not only is a link tax bad for business, it would undermine the intention of authors who wish to share without additional strings attached, such as creators who want to share works under Creative Commons licenses.

    Adopting a new neighboring right for publishers would harm journalists who rely on information-gathering and reporting tools like news aggregators, services like Google Alerts, and social media. It would have significant negative consequences for researchers and educational institutions by adding an unnecessary layer of rights that will make it more difficult for educators and researchers to understand how they can use content as part of their education and research activities.

    Finally, the adoption of a link tax would create additional barriers for users and online information-seekers. Many users rely on curated news aggregators like Google News, RSS readers, or other apps that reproduce snippets of content from news articles. If an additional right for publishers is established, these existing news products and services will likely be disrupted, their prices increased, or the services discontinued altogether (as we’ve seen in Spain with Google News). Popular social networking apps and websites used by hundreds of millions of people could be negatively affected too.

    Freedom of Panorama: Commonsense rules for sharing culture

    Freedom of panorama refers to the legal right to take and share photos, video, and images of architecture, sculptures and other works which are located in a public place. The sharing of photos taken in public places is an example of an everyday activity that should not be regulated by copyright. We know that the lack of harmonization around the freedom of panorama has negatively affected users who wish to share images of public architecture and sculpture on sites like Wikipedia. We support the adoption of a broad right for freedom of panorama, and it should apply to both commercial and noncommercial uses of images of architecture, sculpture, and other objects in public spaces. The exception should be mandatory across the EU, and should cover both online and offline uses.

    Make your voice heard!

    Time is running out to tell the Commission to do the right thing: No additional rights for publishers; protect and expand freedom of panorama. Be sure to check out http://youcan.fixcopyright.eu/ and submit your responses by June 15.

    The post Tell the European Commission to #FixCopyright appeared first on Creative Commons blog.

    CC Australia Supports Commission Recommendations for User-friendly Copyright Reform

    Creativecommons.org -

    This post was contributed by Stuart Efstathis for Creative Commons Australia.

    Image by Sierra_Graphic, CC0

    The Australian Productivity Commission has recommended important changes to Australian copyright law that support content creators and users in the digital age. On 29 April 2016, the Commission released a Draft Report on reforms to Australia’s intellectual property laws based on the principles of effectiveness, efficiency, adaptability and accountability. Creative Commons Australia strongly supports the passage of the Copyright Amendment (Disability and Other Measures) Bill 2016, as recommended by the Commission. That Bill will introduce extensions to copyright safe harbours and simplify the existing statutory license provisions. We also support the Commission’s draft recommendation to introduce a fair use exception into Australian law.

    The Commission’s Recommendations

    The Productivity Commission concluded that “Australia’s IP system is out of kilter, favouring rights holders over users and does not align with how people use IP in the modern era”. The Draft Report contained a number of useful recommendations that would make Australia’s outdated copyright laws relevant in the digital age:

    • Australia should introduce a fair use exception to copyright. Fair use should replace the current fair dealing exceptions and ensure copyright laws regulate “only those instances of infringement that would undermine the ordinary exploitation of a work at the time of the infringement”;
    • Under current Australian law, copyright in unpublished works lasts forever. This perpetual term should be removed, allowing full use of orphan and out-of-print works;
    • Circumvention of technologies designed to control geographic markets for digital content should not be unlawful. The law requires clarification;
    • All publications funded by State and Federal governments, directly or through university funding, should be free to access through an open access repository within 12 months of publication; and
    • Copyright safe harbours should be expanded to include all online service providers without an expansion of liability for copyright authorisation.

    Creative Commons Australia’s Submissions

    Creative Commons Australia made submissions in response on 3 June 2016, supporting many of the Productivity Commission’s recommendations. CCAU’s submissions were guided by three key principles: ensuring that access to and use of content is not unnecessarily restricted; encouraging creation and innovation; and supporting open access and open licensing.

    Fair Use

    Australia needs a fair use exception to address the needs of consumers and creators of content in a digital market. Consumers and creators need support for new expression, which necessarily builds upon existing knowledge, culture, and expression. CCAU fully supports replacing fair dealing with a fair use exception. Fair use is a flexible exception better suited to the digital age and is likely to align better with consumer and creator expectations for reasonable content use. Fair use encourages the use of content for innovative purposes, reflecting the primary objective of copyright. The Australian Law Reform Commission has issued an extensive report recommending the introduction of fair use, and the Productivity Commission has supported this.

    Copyright Term and international law reform

    Australian copyright law has steadily increased its focus on protecting rights holders over the last two decades. The Productivity Commission suggests that this is reflected in the recent extension of copyright terms from life of the author plus 50 years to life plus 70 years. The Commission notes that this move imposed a significant cost on consumers with no corresponding public benefit. The difficulty in reforming this area is due to an overlapping web of international agreements that entrench the minimum term of copyright protection (including the Berne Convention, TRIPS, the Australia-US Free Trade Agreement, and the Trans-Pacific Partnership Agreement). As a result, Australia cannot independently determine the appropriate scope of its national copyright law. CCAU recommends starting the difficult process of disentangling intellectual property laws from international agreements that do not advance national interests.

    Unpublished Works

    CCAU supports the recommendations of the Productivity Commission removing the perpetual copyright protection afforded to unpublished works under Australian law. A significant amount of Australian cultural heritage remains unjustifiably locked up in unpublished works. This content cannot be digitised, archived, preserved, or reused. This can be rectified by the passage of the Copyright Amendment (Disability and Other Measures) Bill 2016.

    Geo-Blocking and the ‘Australia Tax’

    Australian consumers experience higher prices, long delays, and a lack of competition in digital content distribution markets. This is known as the ‘Australia Tax’. Under current law, it is not always clear whether Australians have the right to circumvent geoblocking technology to access media goods and services sold in other markets. CCAU recommends that Australian law be clarified in this regard, and supports an amendment to the Copyright Act to include exemptions for all types of media, to encourage a competitive digital market in Australia.

    Open Access

    CCAU supports open access to articles, research and data. Open access improves research efficiency, provides assurance of greater scientific integrity, and reduces the overall costs of research infrastructure. For information to be useful, rights to re-use this content need to be clearly detailed through the use of open licensing. This can be achieved through the use of Creative Commons licensing.

    Safe Harbours

    Australian creators are currently disadvantaged by safe harbour exceptions that are too narrow to allow distribution of content in the digital market. Safe harbours provide the legal certainty required for content hosts to distribute creator content. Enacting laws which promote legal access and broader use of copyright content is also the most effective way to reduce infringing activity. CCAU supports the extension of safe harbours to all online service providers.

    The post CC Australia Supports Commission Recommendations for User-friendly Copyright Reform appeared first on Creative Commons blog.

    Sacrificing privacy and fundamental democratic rights in the fight against terror!

    GoOpen.no -

    This week the Lysne II committee delivered its recommendations to the government to introduce full surveillance of data traffic in and out of Norway. This measure represents a significant intrusion into individuals’ privacy, and could also have major consequences for society. Should the proposals in the report be adopted, I believe we will witness our digitally incompetent politicians sacrificing privacy and fundamental democratic rights in the fight against terror.

    A dramatic shift

    Regardless of the final outcome of the Lysne committee’s proposal, recent years have seen a dramatic change in the political willingness to sacrifice privacy through surveillance of people who are not suspected of any crime. Internationally, we have seen many examples of intelligence information being misused, and of those conducting surveillance failing in every way to comply with the laws and rules defined to limit the harm. We must also be allowed to ask whether the political bodies tasked with overseeing, for example, the Norwegian Intelligence Service (E-tjenesten) actually have sufficient competence. The report confirms that E-tjenesten employs leading experts on security and surveillance. I therefore permit myself to ask whether the Storting’s oversight bodies possess the same competence.

    Where is the line?

    It is essential that we have an open debate on these questions, and we must, to a far greater extent than the Lysne committee does in its report, discuss the important questions of principle raised by the massive surveillance that certain groups in society will be subjected to. Where is the line for what we as individuals should accept in terms of surveillance, and in which situations is it crucial that state bodies cannot monitor what we do? To me it is completely incomprehensible how one can propose such a radical measure to the government without at the same time being perfectly clear about how to ensure that, for example, journalists and political parties are not placed under surveillance. We already have examples in Norway of film and photo material being seized from journalists, which I believe represents yet another broken barrier.

    Protected zones

    The growing degree of centralized digital surveillance also raises the need for “digital protected zones” where individuals can move about freely. The consequence of not having this discussion is that a relatively small group of people in PST (the Norwegian Police Security Service), E-tjenesten, and the police end up with an enormous amount of power, because so many people are under surveillance.

    I am also highly critical of how lightly the committee treats certain very important questions. The report states, for example: “Events have shown that it is not possible to build electronic systems that are fully secure against data breaches. Reducing the risk of unauthorized access to data and equipment must therefore be a high priority.” With this, the committee itself confirms a critical problem without addressing the possible consequences other than superficially. What is the actual consequence if this data were to go astray? To compare it with nature conservation, it would be like proposing to build a gas power plant on top of the bird cliff at Runde and settling for saying that we must be kind to the puffins nesting there.

    The report’s shortcomings

    After reading the report, I am left with some fundamental questions that I believe the committee’s report does not address adequately:

    1. Does this surveillance really have the desired effect, namely preventing terror? Many examples show that, for instance, “al Qaida”, an organization referred to in the report, and other terrorist organizations are relatively competent at avoiding leaving digital traces behind. There are also very many examples of intelligence services capturing information but failing to act on it in time.

    2. How do we ensure that the law is not expanded without adequate political deliberation as new needs arise? The report describes this under the heading “formålsutglidning” (mission creep), and experience from, for example, Sweden shows that precisely this kind of expansion becomes relevant relatively quickly. The report vaguely concludes that not all changes are “wrong”, which in itself underscores my concern.

    3. How do we ensure that the law regulating the surveillance is actually followed? Looking at cases involving police surveillance of people in criminal proceedings, we find many examples of the police failing entirely to comply with the rules that apply to, for example, the deletion of data.

    4. How do we ensure full transparency around the oversight mechanisms for this surveillance and, more importantly, how do we ensure that the political bodies tasked with oversight have sufficient competence? Here the report describes a model that in some cases requires prior approval by a court, while in other cases oversight happens after the fact.

    Many dilemmas arise once you grant someone “all powers”. Imagine a situation in which the Storting, through its oversight bodies, plans an inspection of E-tjenesten after it has built a large organization around this massive surveillance. How can the Storting then be completely certain that it is not itself being monitored by the very institution it is tasked with overseeing?

    Relevant links:

    https://www.regjeringen.no/globalassets/departementene/fd/dokumenter/lysne-ii-utvalgets-rapport-2016.pdf

    http://www.digi.no/artikler/utvalg-gar-inn-for-overvaking-av-all-datatrafikk-ut-og-inn-av-norge/351200

     

    Digital sharing culture solves the problem for clubs and associations billed after a photo blunder on May 17th

    GoOpen.no -

    This text was also published in the following online newspapers during the first week of June 2016: ItPro, Itromsø, Computerworld and Tidens krav.

    This week Aftenposten writes about the Loddefjord sports club (Loddefjord idrettslag), which used a picture of the Norwegian flag in connection with May 17th without asking the photographer for permission. This cost them dearly, which is both sad and unnecessary, given that good alternatives exist at no cost and with no risk of ending up in court.

    In a world where sharing pictures on social media and on one’s own websites has become commonplace, it is important to be aware of the consequences of using a copyrighted picture without the rightsholder’s permission. Loddefjord idrettslag learned this the hard way in connection with this year’s May 17th celebration. After using a photo taken by photographer Martine Petra Hoel, they had to pay 5,000 kroner. Fana IL made the same mistake and received a bill of 10,000 kroner. Many clubs and associations across the country have ended up in the same situation.

    Personally, I believe the photographer in this case is exploiting an outdated law and demanding far too high a sum for what was a minor mistake. Still, the clubs and associations themselves bear the responsibility here, even though it is entirely unnecessary for them to put themselves in this bind. The good news is that there is a very simple solution to the problem. The digital sharing culture is today well developed. It is built on pictures and other sources being released under what is called a free license.

    The most widely used of these is Creative Commons. This license allows anyone to reuse pictures, film and text without asking for permission, subject to certain conditions. The rightsholder has granted permission for reuse in advance by applying the license. This global sharing culture is driven by volunteer contributors who specifically want their pictures, films or texts to be reusable by others. Sites such as Wikipedia and Pixabay.com today offer a large number of high-quality pictures under various free licenses.

    If you look for a picture of the Norwegian flag on Wikipedia, for example, you will find, among others, a photo taken by the photographer Hans-Petter Fjeld. Hans-Petter is one of many volunteers doing a fantastic job of ensuring that the Norwegian version of Wikipedia has pictures like these.

    Photo: Hans-Petter Fjeld, CC BY-SA 2.5

    In my day job I work at Nasjonal Digital Læringsarena (NDLA, the Norwegian Digital Learning Arena), a collaboration between county authorities to develop digital learning resources for upper secondary education. For us, the digital sharing culture is part of our strategy. In practice this means that we share the content we develop ourselves under a free license, while we also gladly reuse pictures that others have shared.

    When our editorial team at NDLA needed a picture of a flag for one of our articles, we used the previously mentioned Wikipedia image that Hans-Petter has shared. For Loddefjord idrettslag or Fana IL, using the same picture would have been completely free – with no risk of ending up in court or receiving a large invoice in the mail.

    Council of the European Union calls for full open access to scientific research by 2020

    Creativecommons.org -

    Science! by Alexandro Lacadena, CC BY-NC-ND 2.0

    A few weeks ago we wrote about how the European Union is pushing ahead its support for open access to EU-funded scientific research and data. Today at the meeting of the Council of the European Union, the Council reinforced the commitment to making all scientific articles and data openly accessible and reusable by 2020. In its communication, the Council offered several conclusions on the transition towards an open science system:

    • ACKNOWLEDGES that open science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of Europe;
    • INVITES the Commission and the Member States to explore legal possibilities for measures in this respect and promote the use of licensing models, such as Creative Commons, for scientific publications and research data sets;
    • WELCOMES open access to scientific publications as the option by default for publishing the results of publicly funded research;
    • AGREES to further promote the mainstreaming of open access to scientific publications by continuing to support a transition to immediate open access as the default by 2020;
    • ENCOURAGES the Member States, the Commission and stakeholders to set optimal reuse of research data as the point of departure, whilst recognising the needs for different access regimes because of Intellectual Property Rights, personal data protection and confidentiality, security concerns, as well as global economic competitiveness and other legitimate interests.

    You can read the rest of the conclusions here. Crucially, the Council said that “open access to scientific publications” will be interpreted as being aligned to the definition laid out in the Budapest Open Access Initiative: free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

    The post Council of the European Union calls for full open access to scientific research by 2020 appeared first on Creative Commons blog.

    Uruguayan rights holders seek to roll back progressive copyright reform

    Creativecommons.org -

    Law, by Woody Hibbard, CC BY 2.0

    Uruguay is in the process of updating its copyright law, and in April a bill was preliminarily approved in the Senate. The law introduces changes that would benefit students, librarians, researchers, and the general public by legalizing commonplace digital practices, adding orphan works exceptions, and removing criminal penalties for minor copyright infringements. University students were the original proponents of the limitations and exceptions bill.

    But after its initial approval, collecting societies and publishers mounted a media campaign to roll back the bill. And yesterday, a document was released that outlines the views of the authors’ collecting society (AGADU), the organization representing book publishers (CUL), and the university students (FEUU).

    According to CC Uruguay, these organizations have come to an “agreement” that would remove or modify many of the positive portions of the bill. The changes would have far-reaching negative consequences for users, educational institutions, libraries, and the public. They include:

    • Eliminating the exception that permits copying for personal use. This could make illegal everyday practices such as making personal backups or format-shifting legally-acquired content.
    • Retaining the possibility of criminal penalties for minor infringements. This could mean that users who are technically infringing but who do not cause any financial harm to the author could still be liable for monetary damages of up to 45,000 US dollars, or even imprisonment. This could include harmless, widespread social practices like downloading files without intent to distribute or profit from them. However, it should be noted that the existing Senate bill recommends that such matters be handled via civil—not criminal—law.
    • Drastically limiting the scope of exceptions and limitations for education. Their recommendations seek to eliminate the ability of teachers to make translations or adaptations of copyrighted works within their educational institutions. For those uses that are permitted, they want to restrict the scope of the exemption to cover only reproducing short portions (up to 30 pages) of textbooks and “educational materials”. And the organizations say that only public educational entities should be able to take advantage of the copyright exception; private and community educational institutions would be excluded. However, the current Senate bill is more supportive of exceptions and limitations for education. It permits both translations and adaptations of copyrighted works within educational institutions. It also does not discriminate against private and community institutions. Furthermore, the Senate version does not limit reproductions to only “educational materials”. This is important in order to take into account the wide variety of resources that are necessary for instruction in higher education today, but which might not fit a traditional definition of “educational”. For example, music students need access to musical works, and many other subject areas need to be able to access and use fragments of literary, scientific, and philosophical works. Finally, the Senate bill does not impose an arbitrary page limit on how much of a copyrighted work may be reproduced. Instead, it allows for greater flexibility in how much may be used; if there is a dispute, a judge will be able to assess whether the use was reasonable—taking into account the specific context of the educational use.
    • Adding severe restrictions on libraries. The recommendations seek to permit public lending only for written works, which would restrict the public lending of musical, audiovisual, and photographic works. (The Senate bill already legalizes the public lending of software.) The coalition suggests that the law be changed from permitting public lending for nonprofit purposes to permitting lending “whose activities do not directly or indirectly involve any commercial purpose”, a change that could further restrict the ability of libraries to lend materials. Furthermore, reproductions of copyrighted works made by libraries at the request of a user would also be subject to the arbitrary 30-page limit. Finally, the Senate bill allows libraries to make a copy of a work for replacement purposes when the work is no longer available at a reasonable market price; the group of organizations suggesting the changes wants to eliminate this provision.
    • Enacting restrictions on freedom of panorama. The Senate bill legalizes a broad freedom of panorama—which means that anyone is permitted to draw, photograph, film, or create 3D models of architectural works, monuments, and works of art exhibited permanently in public places. However, the coalition wants to restrict freedom of panorama to only non-commercial uses. This would mean that photographers, filmmakers, or artists who want to market their own works containing public monuments and architecture would be violating the law if they didn’t get permission from the rightsholder in the underlying work.

    CC Uruguay believes that the recommended changes would be harmful for users, educational institutions, libraries, and the public. The changes would eliminate two of the most important protections in the Senate reform bill: the decriminalization of non-commercial infringement, and personal-use copying. The changes would also severely restrict other exceptions and limitations to copyright, including those for education, library lending, and freedom of panorama.

    Their document recommends scaling back most of the user-friendly provisions in the bill and cuts other items that were drafted by the Council of Copyright in the Ministry of Education and Culture—items which had already received unanimous political support from all parties in the Senate.

    CC Uruguay thinks that Senate policymakers should view these recommended changes as only one voice among many stakeholders. Decisionmakers must also take into account the diversity of voices from educational institutions, libraries, and civil society organizations. The laws regulating access to creativity and culture should support the needs and interests of the public, and should be reached through a broad and democratic debate among all stakeholders.

    The post Uruguayan rights holders seek to roll back progressive copyright reform appeared first on Creative Commons blog.
