International news

CC Search: A New Vision, Strategy & Roadmap for 2019

At the Grand Re-Opening of the Public Domain at the Internet Archive, I teased a new product vision for CC Search that gets more specific than our ultimate goal of providing access to all 1.4 billion CC licensed and public domain works on the web. I’m pleased to present that refined vision, which is focused on building a product that promotes not just discovery, but reuse of openly-licensed and public domain works. We want your feedback in making it a reality. What kinds of images do you most need and desire to reuse when creating your own works? In that vein, what organizational collections would you like to see us prioritize for inclusion? Where can we make the biggest difference for you and your fellow creators?


Our 2019 vision is:

“CC Search is a leading tool for creators looking to discover and reuse free resources with greater ease and confidence.”

The vision centers on reuse — CC will prioritize and build for users who seek to not only discover free resources in the commons, but who seek to reuse these resources with greater ease and confidence, and for whom in particular the rights status of these works may be important. This approach means that CC will shift from its “quantity first” approach (front door to 1.4 billion works) to prioritizing content that is more relevant and engaging to creators.

We made our assumptions based on a combination of user research, whatever quantitative data we could get our hands on (e.g. analytics on previous iterations of search), and pure conjecture (based on anecdotal evidence from our community), or what in the lean start-up world is called a leap of faith.

How we expect reuse to happen

The base catalog is the database of all CC works we are continuing to gather and grow. We envision users will be able to access this catalog in three ways:

  1. Through CC Search — the default front end you see now.
  2. Through some curation on CC Search — you could imagine different portals for different kinds of users, e.g. educators seeking open textbooks.
  3. Through CC Search being integrated directly into other sites and software via a CC API, e.g. CC Search in Google Docs.
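The third access path — integration via an API — can be sketched as follows. This is a minimal illustration only: the endpoint URL, parameter names, and response fields here are assumptions for the sake of example, not the actual CC API.

```python
import json
from urllib.parse import urlencode

# Hypothetical API base URL -- the real endpoint and parameters may differ.
API_BASE = "https://api.example.org/image/search"

def build_search_url(query, license_type="by", page_size=20):
    """Build a query URL for a hypothetical CC Search image API."""
    params = {"q": query, "li": license_type, "pagesize": page_size}
    return API_BASE + "?" + urlencode(params)

def extract_reusable_works(response_text):
    """Pull title, creator, and license out of an assumed JSON payload,
    so a host application (e.g. a document editor) can offer the works
    for insertion along with the information needed for attribution."""
    payload = json.loads(response_text)
    return [
        {"title": r["title"], "creator": r["creator"], "license": r["license"]}
        for r in payload.get("results", [])
    ]
```

An integrating site would call `build_search_url` when the user types a query, then feed the response body to `extract_reusable_works` to render results with license information attached.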

Once the user accesses the work, the user takes the next step to reuse the work. They download it, which means they make a copy. The user who is also a creator takes a step further; they attribute the author of the work in their new creation, ideally through the automatic and easy ways we provide for them to do this. Both download and attribution are ways a user reuses the work in a way that implicates copyright and thereby the Creative Commons license. And both are potential ways we can learn how that work is used in the wild.
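One "automatic and easy" way to support attribution is to generate a ready-made attribution string following CC's recommended TASL pattern (Title, Author, Source, License). The helper below is a hypothetical sketch of such a feature, not part of any existing CC tool.

```python
def format_attribution(title, author, source_url, license_name, license_url):
    """Compose a 'TASL' attribution string: Title, Author, Source, License.
    The TASL elements follow CC's attribution guidance; this function
    itself is a hypothetical helper for illustration."""
    return (f'"{title}" by {author} is licensed under {license_name}. '
            f'Source: {source_url} | License: {license_url}')
```

A front end could offer this string as copyable text (or pre-filled HTML) next to the download button, so attribution happens at the same moment as reuse.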

Through learning about how CC works are reused, we will be able to validate our hypotheses and know we are on the right track (or not). We will also be better able to tell the story or journey of the works’ impact, which speaks to a key insight from our user research:

“People like seeing how their work is used, where it goes, and who it touches, but have no easy way to find this out.”

This learning is the hard part of our work, and what we still need to figure out. How do we track and learn about reuse in a way that is effective, but also aligns with our values and respects user privacy?

User research & usability testing

In 2019, we will focus on images and texts, with a stretch goal of including audio files. Accordingly, we will focus any user research and usability testing on groups of people that reuse these works in a meaningful way, specifically, “Creators making new works using existing free content.” A few we will start with are:

  • Creators making designs, imagery and art works (commercial or independent)
  • Creators illustrating a text or text-based resource (blog, journalistic articles, educational/academic texts or presentations)
  • Creators making a video

We’ll also be doing some separate user research on adding open texts, which involves a different bucket of people than the creators above, because we think (but don’t know) that most people seeking open texts are really seeking access, not reuse, when it comes to CC Search. For example, we think that community college faculty looking for open textbooks are mainly seeking to access all open textbooks in one place.

As we talk to users, collect user feedback, and conduct usability testing, we may learn differently.


Based on this new 2019 vision and strategy, here are some of our key deliverables for the year.

The complete roadmap is available here, and it also includes a pipeline of ideas. The pipeline of ideas is the master list of ideas from the community that we will revisit at the end of each quarter to decide what makes it into the roadmap. The roadmap is an evolving document and we welcome your comments and feedback.

The Team (from upper left): Kriti, Sophine, Alden, Breno, Sarah, Jane

The current CC Search team is led by CC’s Director of Engineering, Kriti Godey, and myself, CC’s Director of Product and Research. The other members are Sophine Clachar (Data Engineer), Alden Page (Back End Engineer), Breno Ferreira (Front End Engineer) and Sarah Pearson (Product Counsel).

Get involved

We are growing a vibrant community of open source developers and users willing to test and provide feedback on CC Search.

If you’re a current or potential user of CC Search, join the #cc-usability channel on the Creative Commons Slack, where we regularly engage the group for feedback on new features.

If you’re a developer, check out Creative Commons Open Source, a hub for the CC developer community and the #cc-developers channel at the Creative Commons Slack.

The post CC Search: A New Vision, Strategy & Roadmap for 2019 appeared first on Creative Commons.

Use and Fair Use: Statement on shared images in facial recognition AI

Yesterday, NBC News published a story about IBM’s work on improving diversity in facial recognition technology and the dataset that they gathered to further this work. The dataset includes links to one million photos from Flickr, many or all of which were apparently shared under a Creative Commons license. Some Flickr users were dismayed to learn that IBM had used their photos to train the AI, and had questions about the ethics, privacy implications, and fair use of such a dataset being used for algorithmic training. We are reaching out to IBM to understand their use of the images, and to share the concerns of our community.

CC is dedicated to facilitating greater openness for the common good. In general, we believe that the use of publicly available data on the Internet has led to greater innovation, collaboration, and creativity. But there are also real concerns that data can be used for negative activities or negative outcomes.

While we do not have all the facts regarding the IBM dataset, we are aware that fair use allows all types of content to be used freely, and that all types of content are collected and used every day to train and develop AI. CC licenses were designed to address a specific constraint, which they do very well: unlocking restrictive copyright. But copyright is not a good tool to protect individual privacy, to address research ethics in AI development, or to regulate the use of surveillance tools employed online. Those issues rightly belong in the public policy space, and good solutions will consider both the law and the community norms of CC licenses and content shared online in general.

I hope we will use this moment to build on the important principles and values of sharing, and engage in discussion with those using our content in objectionable ways, and to speak out on and help shape positive outcomes on the important issues of privacy, surveillance, and AI that impact the sharing of works on the web.

We are taking this opportunity to speak to this particular type of reuse – improving artificial intelligence tools designed for facial recognition through the reuse of content found on the web (not just CC-licensed content) – to help clarify how the licenses work in this context. We have published new FAQs here that we will continue to update.

If you have comments or questions, please write to CC. We will also be creating other opportunities to engage in public discussion in the coming weeks and months. We look forward to joining these discussions as we look for ways to resolve ethical public policy issues around data, AI, and machine learning as a community.

The post Use and Fair Use: Statement on shared images in facial recognition AI appeared first on Creative Commons.

Big Flickr Announcement: All CC-licensed images will be protected

I’m happy to share Flickr’s announcement today that all CC-licensed and public domain images on the platform will be protected and exempted from upload limits. This includes images uploaded in the past, as well as those yet to be shared. In effect, this means that CC-licensed images and public domain works will always be free on Flickr for any users to upload and share.

Flickr is one of the most important repositories of openly-licensed content on the web, with over 500M images in their collection, shared by millions of photographers, libraries, archives, and museums around the world. The company was an early adopter of CC licenses, and was bought by Yahoo! and later sold to Verizon. Last year, Flickr was sold again, this time to a family-owned photo service called SmugMug. Many were justifiably concerned about the future of Flickr, an essential component of the digital Commons.

Once the sale of Flickr was announced, CC began working closely with Don and Ben MacAskill of SmugMug, Flickr’s new owners, to protect the works that users have shared. Last November, Flickr posted that they were moving to a new paid service model that would restrict the number of free uploads to 1,000 images. Many, including Creative Commons, were concerned this could cause millions of works in the Commons to be deleted. We continued to work with Flickr, and a week later, they announced that CC-licensed images that had already been shared on the platform would be exempted from upload limits.

Today’s announcement takes that commitment one step further, and ensures that every CC-licensed or public domain image shared on Flickr is protected for all to use and re-use. It’s a significant commitment. Don and Ben MacAskill and the whole Flickr team have been supportive of CC and Flickr’s responsibility to steward the Commons from day one, and have been open and collaborative with Creative Commons all along.

For users of Flickr (and no doubt also for Flickr staff) it’s been a tumultuous time. Migrating to new business models is difficult, and will undoubtedly anger some users, especially those used to getting things for free. However, we’ve seen how unsustainable and exploitative free models can be, and I’m glad that Flickr hasn’t turned to surveillance capitalism as the business model for its sustainability plan – but that does mean they’ll have to explore other options.

Choosing to allow all CC-licensed and public domain works to be uploaded and shared without restrictions or limits comes at a real financial cost to Flickr, which is paid in part by their Pro users. We believe that it’s a valuable investment in the global community of free culture and open knowledge, and it’s a gift to everyone. We’re grateful for the ongoing investment and enthusiasm from the entire Flickr team, and their commitment to support users who choose to share their works. We will continue to work together to help educate Flickr’s users about their options when sharing works online, and to support the communities contributing to the growth and preservation of a vibrant collection of openly-licensed and public domain works.

The post Big Flickr Announcement: All CC-licensed images will be protected appeared first on Creative Commons.

We Couldn’t Do It without YOU

PLOS

Every year, we get to work with new authors, reviewers, and editors who are ushering in the next wave of scientific advancement. We love publishing your work, reading your reviews, and learning from your expertise, and we just want to say THANK YOU for supporting PLOS.

Wow, did we really do all of that?

We did! This has been a banner year for PLOS journals. In 2018 we saw more research articles published in PLOS Biology than ever before, began publishing Topic Pages in PLOS Genetics and Benchmarking articles in PLOS Computational Biology, partnered with bioRxiv to post over 1,300 preprints, and committed to moving forward with published and signed peer review. That’s on top of all of the special issues, Calls for Papers, and collections we’ve published in topics ranging from Climate Change and Health to Gender and NTDs.

We’d also like to extend a warm welcome to more than 3,000 new members of the PLOS ONE Editorial Board who have joined us this year to provide more expertise for submission areas that need it most – we’re glad you’re here!

For everything we do at PLOS, we are supported by the dedication of our research communities.

Together, we’re stronger

We are a community of more than 8,000 editors, 65,000 reviewers, and 150,000 authors. When we work together, we can make change happen in scholarly communication. Last year PLOS Pathogens editors hosted six writing workshops to help Early Career Researchers improve their skills and equip them with the tools they need to become authors. We also hosted interactive events like live-streamed preprint journal clubs to bring authors and experts from the community together for real-time feedback on their work.

We’re listening to your feedback from our surveys, event meetups, and Section Calls and want to continue evolving our services in ways that matter to you.

We’re working on new ways for reviewers to get credit for their work through ORCID as well as signed and published peer reviews. We’re also going to continue the process improvements we’ve started on PLOS ONE to bring a faster, clearer process to our authors, along with a number of exciting new options on other journals – stay tuned!

Cite it, share it, celebrate it

For everyone who has contributed to our success this year: our dedicated Editorial Board, incredible Guest Editors, and inspiring reviewers – these articles are for you!

We’re sure we will have many more opportunities to thank you this year but please join us in celebrating your achievements this week by sharing your PLOS contributions with #PLOSCommunity.

CC + Google Summer of Code 2019

We are proud to announce that Creative Commons has been accepted as a mentor organization for the 2019 Google Summer of Code program.

Google Summer of Code (GSoC) is an annual global program through which Google awards stipends to university students who write code for free and open-source software projects during their school break. CC has been a mentor organization for GSoC on seven previous occasions, but our last participation was in 2013, so we are glad to be reviving the tradition and hosting students again.

We’ve compiled a list of project ideas for students to choose from when submitting their work proposal. There’s a lot of variety to choose from – adding features to CC Search, reviving older CC products, creating entirely new tools that increase the reach of CC licenses, figuring out ways to better present our legal and technical work, and more. There is definitely room for creativity – the project ideas are defined in broad terms, and students may also choose to submit a proposal for an original idea.

One of the goals of the CC engineering team this year is to build an active developer community around our projects. We’ve been writing free and open-source software for over a decade. Lately, we haven’t done the best job of enabling external developers to contribute to those projects. Hosting Google Summer of Code is our first step to change that for the better, and we’re also actively working on several other improvements to our code and processes, such as:

  • Creative Commons Open Source, a hub for the CC developer community.
  • Making CC Search’s development more transparent. Our current sprint workload is already public and we’ll be releasing a roadmap soon.
  • General cleanup, documentation, and contribution guidelines for our projects.
  • A technical blog.

If you want to stay updated on our work, join our brand new developer mailing list, the #creativecommons-dev IRC channel on freenode, or the #cc-developers and #cc-gsoc channels on our Slack community. And if you’re a student (or know a student), please consider submitting a Google Summer of Code proposal! It’s a great way to get an introduction to open-source, build real-world skills, work on interesting technical challenges, and help advance CC’s mission.

The post CC + Google Summer of Code 2019 appeared first on Creative Commons.

Celebrate Open Data Day with Us!

PLOS


Tomorrow, groups from all sectors around the world will celebrate Open Data Day – an annual event that highlights the benefits of open data and encourages the adoption of open data policies in government, business, and civil society. As a publisher, we know that data availability is crucial for the validation and foundation of new research, and key to our mission of helping researchers advance the scientific record. In the spirit of Open Data Day, we’ve decided to defer to researchers and data enthusiasts to answer the questions: why is data important, and how can we make it better?

We got our answers from Sudhakaran Prabakaran, one of a number of dedicated volunteers at Cambridge University known as the Data Champions who are helping advise members of the research community on managing their data. Read his thoughts below.

Who are the Data Champions? What kinds of data questions do you help researchers navigate?

I think it’s a fantastic forum. [We have] a lot of discussions and people exchange ideas and not necessarily just in the sciences but also in every other field. [Data management] can be kind of confusing, even simple things like can I put my [datasets] in Dropbox? Can I share them in Google Drive? You’re talking about even labeling stuff in desktop computers. There is no clarity because this landscape is fast-moving and people are not trained to catch up with that kind of speed at which things change.

What are you working on right now? How does open data play a role in it?

Our lab thrives on open data. We train machine learning algorithms that look at specific regions of the human genome, trying to identify the most important mutations and then identify drugs to target them. Most of the datasets I work with have already been published and analyzed – people have extracted what they want. I’m kind of looking at things that they don’t want – I’m looking at non-coding regions, just digging deeper into the datasets.

Why do you think open data is important? What do you think the future open data landscape looks like?

I don’t think open data is enough…it’s the analysis also. For example, we train a lot of machine learning algorithms, and in the process we fail many, many times, so we know the pitfalls and we know what to avoid. If you share that process with other people, it will enable them to overcome those pitfalls and get there. It is very difficult, and that process can be shared with people.

I think future young people are going to be brought up in an environment where they can just click something and get access to the code and get access to the data themselves. And then the issues of reproducibility would be mitigated if you can share what you’ve done and the data set is there for other people to work with.

What advice would you give to authors and researchers to encourage them to share their data?

I think we have encountered these scenarios even as a Data Champion in my own department. I think if you have incentives, as in [getting] your DOI and authorship for the dataset even before publication, then it’s easy to share. It’s your data; [someone] can probably do a different kind of analysis and publish it, but they have to cite this data and you will benefit from that.

And it’s in the best interest of the authors to share it ahead of time because of reproducibility.


What can you do to encourage good data management?

You can practice the open data lifestyle by sharing your research data in an open repository and making it available when you submit your manuscript. If you’re reviewing a submission, knowing how to evaluate the associated datasets can be tricky, which is why we’ve worked with the Data Champions to cover everything you need to know in this Reviewer’s Quick Guide to Assessing Datasets. If the manuscript you’re reviewing doesn’t have an associated dataset, request it!

About the Data Champions Program

The Data Champions Programme is a network of volunteers who advise members of the research community on proper handling of research data. In this, they promote good research data management (RDM) and support Findable, Accessible, Interoperable, and Re-usable (FAIR) research principles. It is run by the Research Data Management Facility at the University of Cambridge.


Open Education Week: 24-Hour Global CC Network Web-a-thon: 5-6 March


Open Education Week is an annual convening of the global open education movement to share ideas, new open education projects, and to raise awareness about open education and its impact on teaching and learning worldwide. Each year, the Creative Commons global community participates, hosts webinars, gives local talks and shares CC licensed educational resources.

As part of the event this year, the Creative Commons Open Education Platform and CC Poland are hosting a 24-Hour Web-a-thon: 5-6 March (depending on your time zone).

We have amazing speakers from around the world presenting in multiple languages. Experts from Algeria, Nigeria, Argentina, South Africa, Italy, Chile, United Kingdom, Afghanistan, United States, Ireland, Sweden, Canada and Poland will present their open education projects.

Time: All times are UTC; check your local time before joining.

Webinar Room: All sessions will be held in the same webinar room.

Day One – March 5

9:00-9:20 – Open Networked Learning – a collaborative open online course on open networked learning
Presentation of the Open Networked Learning course – an initiative from Karolinska Institutet, Lund University, Linnaeus University and Karlstad University with Partner Universities and Organizations in Brazil, Finland, Ireland, Singapore, South Africa, and Switzerland. In particular, we will focus on the new course homepage powered by WordPress and BuddyPress. Jörg Pareigis, Karlstad University

10:00-10:20 – Open Education Initiatives in Francophone North African countries
In this presentation, we will share the state of Open Education initiatives in Francophone North African countries. Kamel Belhamel, University of Bejaia

10:30-10:50 – Open for Educators: Stirring Action via Support Services
Adoption of OER and OEP by educators (K-12 to HEI) strongly depend on support services available. This presentation considers various support services needed by educators to start to shift and implement OER and OEP using a case study of educators in Nigeria. John Okewole, Yaba College of Technology

11:00-11:20 – Open Education to build a Latin American community of information professionals. Fernando Lopez, Aprender 3C

12:00-12:20 – Creating Educational Equity through OER and Open Degree Plans
This presentation will address OER and open degree as a means for reducing the high cost of earning a college degree, providing equity and access to higher education. Carolyn Stevenson, Purdue University Global

12:30-12:50 – Open is an Invitation: Exploring Use of OER with Ontario Post-Secondary Educators
In this short presentation, with lots of time for conversation, I will share the key findings of my doctoral research conducted in partnership with Ontario post-secondary educators in 2018. Jenni Hayman, Cambrian College

13:00-13:20 – Exploring the multiliteracies to support access to OER in South Africa
This presentation explores the specific multiliteracies required within a South African context in order to support epistemological and demiurgic access to OER. This research took the form of a conceptual study with an integrative literature review and document analysis of selected open educational resources and repositories. A broad framework of multiliteracies is presented for use within the Southern African context. Jako Olivier, North-West University

14:00-14:20 – Design process for an open educational resource: a case study
In this presentation we describe the creation of an open educational resource following an interactive design methodology, working with design students. The work was carried out within the framework of a university seminar aimed at introducing audiovisual design students to technical, legal, and design issues, and their articulation with pedagogical objectives in an interactive design process. The tasks carried out included creating a “user” profile of the target students and analyzing the use situation and the needs of the teachers. The process concluded with the creation of an OER prototype for use in secondary schools. Lila Pagola, Universidad Nacional de Villa María

14:30-14:50 – Open Education Cooperative Educoop
Presentation of the method of co-creation of open educational resources by teachers based on 4 values: cooperation, learning, openness and adventure. I will also show the effects of the first edition of the project. Aleksandra Czetwertyńska, Centrum Cyfrowe

15:00-15:20 – OEGlobal19 call for proposals – tracks and ideas
We will briefly present the main topic of the next OE Global 19 conference and the main tracks in the call for proposals, and offer support to colleagues who might want to ask questions about how to submit their proposals according to the different formats available this year. Susan Huggins, OE Consortium. Chrissi Nerantzi, University of Birmingham. Paola Corti, Politecnico di Milano.

15:30-15:50 – Open Education in Chile: small steps in an adverse context
We will talk about the small steps and complex contexts through which open education in Chile has moved in recent years, with some interesting and relevant experiences, along with the effort on concrete commitments through the Open Government Action Plans. In our opinion, the present and future challenges for open education in the country are enormous; there are important projects and initiatives, which we hope can represent new favorable scenarios of greater equity and educational quality for students. Werner Westermann, Biblioteca del Congreso Nacional

16:30-16:50 – OER news from the UK and OER19
We will share news of developments of OER in the UK and the OER19 Conference, taking place in Ireland in April – we will share how you can participate remotely and highlight important new resources and research. Maren Deepwell, Association for Learning Technology

17:00-17:20 – OpenEd in Oklahoma
The purpose of this presentation will be to share the state of Open at Oklahoma State University where we have been, where we are going, and why. Cristina Colquhoun, Clarke Iakovakis, & Kathy Essiller, Oklahoma State University

17:30-17:50 – One adult student’s perspective on open education opportunities
Older than most teachers and administrators, I’m the first online student at Metropolitan State University’s College of Individualized Studies authorized to use my own eportfolio to demonstrate my prior learning for assessment to complete a bachelor’s degree. An EdTech intern, I study open learning technologies and heutagogy (self-directed learning). Mark Corbett Wilson, Metropolitan State University

18:00-18:20 – A Quick Look at the Future of OER
This talk will look at the impact of new technologies – specifically, open data, cloud technologies, AI and distributed ledgers (blockchain) – on the future shape of OER: what they will look like, how they will be used, and what skills and knowledge will be needed to develop and use them. Stephen Downes, National Research Council Canada

18:30-18:50 – Going beyond the classroom: Digital Humanities OER powered by the European research infrastructure DARIAH
This talk will showcase two types of OER for Digital Humanities that allow for flexibility with different teaching/learning contexts, enable peer learning, and empower teachers and students to see beyond their institutional perspectives. These are: the Parthenos Standardization Survival Kit and the OpenMethods metablog. Erzsébet Tóth-Czifra

19:00-19:20 – OER Momentum in the Rocky Mountains: Policy, Practice and Purpose
Colorado’s unique leadership with statewide OER efforts is steered by the OER Council, a legislatively created advisory group comprised of representatives from a variety of disciplines and institutional types. This session will highlight how a diverse group of individuals in the Rocky Mountain state have advocated and executed OER efforts at the state level, while also highlighting future ambitions in policy, practice and purpose. Meg Brown-Sica, Colorado State University. Brittany Dudek, Colorado Community College Online. Spencer Ellis, Colorado Department of Higher Education. Jonathan Poritz, Colorado State University-Pueblo.

19:30-19:50 – State of Open Data: Data and Education
This talk will showcase the findings of the chapter on data in education in the book State of Open Data. It aims to present the benefits of using open data in education and its value for developing data literacies, but it also highlights the risks of the datafication of education, with the aim of giving a wide landscape of perspectives on data in education. Javiera Atenas, ILDA

20:00-20:20 – An OER Library in Afghanistan
OER in Afghanistan? Yes, it’s true! For several years, we have been making and translating OER into Afghan languages, as part of the Darakht-e Danesh (‘knowledge tree’ library). We will tell you about our small but fierce digital library, we will share our lessons learned doing OER in this part of the world, and we will tell you about how we innovate around challenges like insecurity, connectivity and digital literacy. We’ll highlight some of our exciting future plans, and hopefully, leave you inspired. Lauryn Oates, Abdul Parwani, Darakht-e Danesh Library

20:30-20:50 – New open education initiatives in Ireland
Ireland’s ‘National Forum for the Enhancement of Teaching and Learning in Higher Education’ is a unique body, tasked with supporting & fostering T+L enhancement & collaboration across all HEI’s in Ireland. Terry & Catherine will describe national plans in the area of open education – and are open to ideas & feedback. Terry Maguire & Catherine Cronin, National Forum for the Enhancement of Teaching & Learning in Higher Education

21:00-21:20 – Alquimétricos, Open source DIY didactic building blocks
Recent past, present, and what’s next in building our open tech, didactic content, and branding model. Alquimétricos is a collaborative open project for designing, developing content for, and DIY (handcraft or digital) fabrication of tech-oriented didactic materials. A word on sustainability for open tangible stuff. Fernando Daguanno, Alquimétricos

21:30-21:50 – OER19 Conference – themes & conversations
Following on from earlier presentation by Maren Deepwell & Martin Hawksey, Catherine (and an OER19 guest, TBC) will explore themes of the upcoming OER19 Conference taking place in Galway, Ireland, April 10-11. The overall conference theme is: ‘Recentering Open: Critical and Global Perspectives’. Catherine Cronin, National Forum for the Enhancement of Teaching & Learning in Higher Education

22:00-22:20 – State of Open Education in Canada
We will share the projects and initiatives happening in Canada around open education, looking specifically at provincial initiatives in postsecondary education as well as Open Education policies. We will also highlight what is next for Canada, and what we hope to see for the future of Open Education. Amanda Coolidge, BCcampus. Lena Patterson, eCampusOntario.

22:30-22:50 – Creative Commons Certificates
The 10-week CC Certificates course for educators and librarians provides an in-depth study of CC licenses and develops participants’ open licensing proficiency and understanding of the broader context for open advocacy in the Commons. We will also discuss new CC Certificates in process, facilitator training, translations, and scholarships. Cable Green, Creative Commons


7:00-7:20 – Equity-oriented Open Learning in the Marginal Syllabus
A presentation about equity-oriented open learning as supported by the Marginal Syllabus project. The presentation will review design and learning practices summarized in Kalir (2018), along with additional information about open learning via the Marginal Syllabus project. Remi Kalir, University of Colorado Denver

10:00-10:20 – Online roundtable on Growing Open Education Policies in 2019
This session is an opportunity for all activists to join and briefly present their organisations and plans for 2019. The host of this session is the Centrum Cyfrowe Foundation from Poland – we will present some details about the Open Education Policy Forum 2019. Alek Tarkowski, Centrum Cyfrowe Foundation

Be sure to share your Open Education Week activities with: #OEWeek

See you online!

The post Open Education Week: 24-Hour Global CC Network Web-a-thon: 5-6 March appeared first on Creative Commons.

Open Education news from around the world #1

European Open EDU Policy Project -

The OE Global Conference organized by Open Education Consortium will take place in Milan, Italy, 26 – 28 November, 2019. The conference is devoted exclusively to open education, attracting researchers, practitioners, policy makers, educators and students from more than 35 countries to discuss and explore how Open Education advances educational practices around the world. The theme of the Conference is Open Education for an Open Future – Resources, Practices, Communities. Proposal submissions due: 1 May 2019.

The webinar series on Open Pedagogy: the Open Education Consortium and the State University of New York have teamed up to offer a series of webinars (from February 20 to April 30) which mainly seek to:

  • highlight examples of open pedagogy practices that may be replicated
  • facilitate and support a community of practice for Open Pedagogy

The agenda is available here.

Did you know that eCampusOntario Open Library provides educators and learners with access to more than 250 free and openly-licensed textbooks? The library is now integrated with publishing infrastructure, allowing easy creation or adaptation of OER, version tracking, cloning support and interactive content.

Open Education Week (March 4-8) is approaching! OE Week is a celebration of the global Open Education Movement and gives us an annual opportunity to show the world what’s happening with open education. The list of all events is available here:

  1. The 2nd Global Open Education Web Conference
  2. Critical thinking with Europeana webinar: the use of primary sources as an effective technique to spot fake news
  3. The story of the Open University in Europe and the world
  4. Ongoing initiatives for Open Education in Europe

‘How do we define success?’ – Rethinking failure and success in science

Plos -

Independent of the context, failure is a word that hardly ever leaves us indifferent. Fear of failure is human nature, and it is common that we prefer not to talk about failures if we can avoid it. When we think about this in a professional context, failure can have clear and immediate ramifications for reputation and career progression and – as with any other professional – researchers are not immune to this fear of failure.

Part of this approach to failure in research is due to the fact that the research system has traditionally rewarded those who are the first to report a finding over those who are second, and those who report a positive result over those reporting a negative one. However, research generally involves a trial-and-error approach and a plethora of negative findings, or protocols that require troubleshooting before they are fine-tuned. Thus ‘failed’ experiments are common; more so than is often recognized or reported. Much effort and many hours of meticulous research endeavour go unrecognized by the current research assessment frameworks, resulting in a considerable squandering of potentially important research outcomes.

The ‘Failures: Key to success in science’ event at the Cambridge Festival of Ideas 2018 aimed to reflect on these considerations in a conversation involving our five panellists and the audience around the notions of failure and success in science.

Our five panellists kicked off the conversation by giving their perspective on what a successful research career should look like.

‘How do we define success?’ asked Cathy Sorbara (Co-chair of CamAWiSE, Cambridge for Women in Science and Engineering), maintaining that science does not have a defined endpoint, that collaboration should be a key part of the research process, and that scientists need to think about how they communicate their work, particularly to those unfamiliar with research. Tapoka Mkandawire (PhD candidate, Sanger Institute) felt that a key aspect of success is to work on something that you feel passionate about and are keen to share. The audience was interested in the forms that communication of research could take and the panellists noted that communication about research should not be restricted to publications, putting forward ideas around visual formats such as videos. Tapoka noted that her research group has developed a comic book to more easily describe their work to children.

A common theme was that the binary classification of success vs failure is somewhat unfair. Should a result be tagged as a failure only because it is negative and has not been published? Fiona Hutton (Head of STM Open Access Publishing at Cambridge University Press) advocated the development of a more collaborative open pathway for research, with more openness at all steps of the process, such as that demonstrated with open lab notebooks, to capture the incremental steps that make up the research process. The sharing of negative and null results should be encouraged as well, as a move away from frameworks that rely on impact factors to assess the quality of research; Fiona mentioned DORA as a good initiative in this space, which is gaining support from institutions and funders.

Arthur Smith (Deputy Manager of Scholarly Communication (Open Access), University of Cambridge) and Stephen Eglen (Reader in Computational Neuroscience, University of Cambridge) tackled the challenges with the current research system and acknowledged that this places Principal Investigators (PIs) as the ‘survivors’ of the system, with only a few reaching the top of a steep pyramidal career structure. Stephen stressed that the driving force for getting into research should be a genuine interest in science and not the goal to eventually become a PI. Arthur noted that there are many other career paths available after a PhD and that the skills gained can be used in many other areas, such as the private sector. The training of PhD students should include aspects that go beyond publishing, and should balance this with the development of communication and other skills.

To round up the discussion we asked panellists to provide recommendations for steps that can help shift perceptions about success and failure in science. Here is what they told us:

  • More support for early career researchers, so that they can have an informed, broader view of their career, and of the options after a PhD.
  • Further recognition for the wide range of different roles that scientists play beyond the publication of research findings – for example, peer review activities, mentorship, etc.
  • Provision of credit for recording and reporting troubleshooting, for any work that may not follow the shape of a conventional publication but which would help others engaged in related research.
  • More training for those in a research path, to help them develop a variety of transferable skills, and to recognize the value of those skills.
  • Increased diversity – higher diversity can only be beneficial in driving change towards how success is defined.

Achieving these aims and helping to sway current views about failures in research represents a formidable task, but – much like science itself – change progresses one step at a time and we hope that the engaging conversation at the Festival of Ideas provided one such step to shift how we define “success” in research. As we pursue initiatives towards such change, let’s remind ourselves of Arthur Smith’s definition of success: ‘Success is what makes you happy’.


Announcing Plays Well With Others, a new podcast about the Art and Science of Collaboration -

I’m thrilled to share the first episode of our podcast, Plays Well with Others, with our community. It’s about the art, science, and mechanics of collaboration.

Ask yourself: How often have you walked into a room where you were about to work with colleagues, friends, or even strangers, and thought, “I’m going to focus on being a great collaborator today”? We spend so much time on leadership, and hardly any time on helping each other do great work together.

We hope to change that, in our own small way, with Plays Well with Others.

I couldn’t be more excited about this project — I’ve always wanted to produce radio journalism. I love interviewing people and helping them tell the best version of their stories. And it’s been a joy to work with my colleague and collaborator Eric Steuer on the podcast’s design and development. I’ve loved the opportunity to do creative work, and to work directly on something like this that is close to my heart, and that I feel is really good. We’re incredibly proud of how it’s turned out, and we hope you’ll enjoy it and learn something along the way.

Collaboration is a natural topic for me. My job at CC is all about making collaboration happen, and it’s been at the centre of my work for my entire career — across cultures and timezones. I’m fascinated by the things that we can only do together. Collective action — from the power of a union to build a more equitable world, to the ability of shared public investments to strengthen communities, to the potential of the commons to democratize knowledge — these ideas inspire me. It’s why I do what I do.

The first episode of our show focuses on the writer’s room on a comedy show. Our guests, Anne Lane and Wayne Federman, deserve our thanks for trusting us with their stories. They had no idea if what we were making would be any good, and there was no body of work to look at, since it was our first episode. Anne and Wayne were open and honest, and they gave us a view into a special room that very few people get to visit. But they also helped reveal insights into how collaboration works in groups: how to deal with the disappointment of watching your joke fall flat; the importance of picking up what other collaborators are laying down (“yes, and”); and an acknowledgment that many ideas, combined with the need to ship an episode, yield better results than writing alone in your basement.

I hope each episode yields such great insights — that’s certainly our goal. Our operating thesis is that digging into how collaboration really works might help us all become better collaborators ourselves.

Each episode homes in on an element of collaboration, told by some of the world’s great collaborators. We’ll publish one episode each month, and the first season will likely run around eight episodes. I hope you’ll join us along the way, and that we’ll all learn some new tactics, techniques, and approaches — together.

Listen now


EU copyright directive moves into critical final stage -

Journalists at work in the European Parliament in Strasbourg, CC BY-ND 2.0

In September 2018 the European Parliament voted to approve drastic changes to copyright law that would negatively affect creativity, freedom of expression, research, and sharing across the EU. Over the last few months the Parliament, Commission, and Council (representing the Member State governments) were engaged in secret talks to come up with a reconciled version of the copyright directive text.

The closed-door “trilogue” negotiations are now complete and a final compromise has been reached. The text is not yet published, but MEP Julia Reda has shared unofficial versions of Article 13 (upload filters) and Article 11 (press publishers right). Both carried through with no major improvements for user rights and the public interest.

For an overview and evaluation of the key issues, check out Communia’s website:


Article 13 and 11: Still bad (or worse) for creators and users

It’s clearer than ever: Article 13 will require nearly all for-profit web platforms that permit user uploads to install copyright filters and censor content. While an earlier version included an exclusion for small companies, that provision has been reeled in. Now only services that have been operating for less than 3 years, with annual revenue below €10 million, and with fewer than 5 million unique visitors each month, will be excluded from the rule. And the filters need to process all types of content — from music to text to images to software — anything that can be protected by copyright. If platforms don’t take action, they assume liability for what their users publish online. This will surely harm creativity and freedom of expression in Europe. Some types of services will be exempted, for example Wikipedia, or open source software platforms such as GitHub. But for the vast majority of online platforms in Europe this will mean more regulatory burden and costs, and it will make it more difficult to compete with the big established platforms.
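The small-service exclusion is conjunctive: a platform escapes the filtering obligation only if it clears all three thresholds at once, and failing any single one brings it under the rule. A minimal sketch of that check, in Python, with illustrative parameter names (these are not terms from the directive text):

```python
def article13_exempt(years_operating: float,
                     annual_revenue_eur: float,
                     monthly_unique_visitors: int) -> bool:
    """Sketch of the Article 13 small-service exclusion as described above.

    All three conditions must hold simultaneously; a service that fails
    any one of them falls under the upload-filter obligations.
    Parameter names are hypothetical, chosen for illustration only.
    """
    return (years_operating < 3
            and annual_revenue_eur < 10_000_000      # below €10 million/year
            and monthly_unique_visitors < 5_000_000)  # below 5 million/month

# A two-year-old service with €2M revenue and 1M monthly visitors is excluded:
print(article13_exempt(2, 2_000_000, 1_000_000))   # True
# A four-year-old service fails the age test alone, however small it is:
print(article13_exempt(4, 2_000_000, 1_000_000))   # False
```

The point the conjunction makes concrete: a tiny but long-established platform, or a young platform with one large viral month, is not excluded.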

Article 11 got no better. It would force news aggregators to pay publishers for linking to their stories. The counterproductive press publishers right would last for 2 years. The text claims that the right will not apply to “individual words or very short extracts of a press publication.” At least openly licensed works such as those under Creative Commons or in the public domain would be exempted.

What’s next?

The final text of the directive will be released soon. While the trilogue negotiators focused on Articles 13 and 11, there were some productive changes that will improve the situation of the commons, cultural heritage, and research sectors. For example, we know that the negotiators agreed upon a provision to ensure that reproductions of works in the public domain will also be in the public domain. They included text to improve the ability for cultural heritage institutions to better serve their users online. And the negotiators slightly improved the exception on text and data mining by making mandatory an earlier optional provision that would expand the possibilities for those wishing to conduct TDM.

The European Parliament elections are coming up in May, and the existing Parliament will vote on the final text of the copyright directive beforehand. The plenary vote will take place between late-March and mid-April. This is when all 751 MEPs will get a chance to vote Yes or No on adopting the text as finalised by the trilogue.

With Article 13, it’s no exaggeration to say that it’ll fundamentally change the way people are able to use the internet and share online. And the European copyright changes will affect how copyright develops in the rest of the world. Even with some of the minor improvements to other aspects of the copyright file, it’s hard to see how the reform — taken as a whole — will be a net gain except for the most powerful special interests.

There is still time to make your voice heard on stopping the harmful upload filters and press publishers right. If you’re in Europe, visit to get more information and contact your MEPs before the vote.


From Preprint to Publication

Plos -

Live preprint journal clubs provide early feedback for PLOS ONE authors

We love it when preprints go on to be accepted as formal journal publications and we are especially excited to announce that EMT network-based feature selection improves prognosis prediction in lung adenocarcinoma, a featured preprint in our Open Access week event, is now published in PLOS ONE!

In October, we celebrated OA Week’s theme of “Designing Equitable Foundations for Open Knowledge” by teaming up with PREreview to host virtual preprint journal clubs where researchers from around the world could share their expert opinions on preprints AND get credit for their reviews. Thanks to this event, the authors of this preprint received crowd-sourced feedback on their work even as their submission underwent formal peer review by PLOS ONE.

Lead author Borong Shao took advantage of the unique opportunity to participate in the discussion and we asked her to tell us what she thought of preprints and the virtual journal club experience. Read her thoughts below:

Can you tell us a little bit about your research? What made you decide to post the work as a preprint?

We were working on the topic of molecular signature identification using multiple omics data types. We posted our work to let our new results reach the research community. In our experience, preprints are read and discussed by researchers just as much as formally accepted articles.

How does your field or research community feel about preprints in general?

In my opinion, preprints are welcomed if the work has a great idea to share. This can assist or even inspire other researchers in their work without waiting for the article to be formally accepted.

Tell us about your experience discussing your preprint at a live journal club—How did you feel about the opportunity?

I was a bit nervous because I had no such experience before. I wondered whether the audience would have positive or negative opinions about my work, although I think my work has its value. I was excited too, because our work is read by researchers all over the world. Some of them are from a relevant but not the same discipline. I was curious to know their opinions on our manuscript.

Did you use any of the feedback from the virtual journal club? Did you find this kind of feedback useful in general?

Both my professor and I found the suggestions from the virtual journal club very helpful. They gave us useful advice from the viewpoints of both readers and researchers. Much of the feedback could be implemented in a short time to improve the quality of our work. Other feedback can be learned from and applied in our future research. There were also a few mistakes that we might not have discovered without the PREreview feedback.


Preprints aren’t just helpful to authors: early comments from your community can also help editors at the journal conduct their evaluation of the work. PLOS ONE Academic Editor Aamir Ahmad had the opportunity to handle Dr. Shao’s submission and felt that the early feedback process was “a great initiative… the feedback was excellent in general and the authors did a good job of incorporating the changes.”

PLOS wholeheartedly supports preprints and the myriad benefits they offer researchers. We’re making it easier for authors to share their work as a preprint, immediately upon submission, through our posting service in partnership with bioRxiv, and we were happy to find another partner in PREreview, who have pioneered live preprint journal clubs for early discussions like these to take place.

You can find more information on preprints here and live-streamed journal clubs here. Please also join us in congratulating Dr. Shao and her co-authors on their recent publication!


PLOS Provides Feedback on the Implementation of Plan S

Plos -

We welcome Plan S as a ‘decisive step towards the realisation of full open access’1, in particular the push it provides towards realization of a research process based on the principles of open science. This is fully aligned with our mission to bring scientists together to share work as rapidly and widely as possible, to advance science faster and to benefit society as a whole. Our publications have operated in line with the core principles outlined in Plan S since the launch of our first journal, PLOS Biology, in 2003. We recognize that wide adoption of support for Plan S may bring additional competition within the open access publishing space. We welcome this evolution as a positive change in research culture, resulting in greater availability of information, growing inclusion in the scientific process and increasing the speed of discovery and innovation. Below is our response to the call for public feedback.

Feedback Questions

  1. Is there anything unclear or are there any issues that have not been addressed by the guidance document?

While welcoming Plan S, its principles and stated intentions, there are some points where we believe additional clarification would be beneficial.

A. Changing research assessment

We are glad to see emphasis on changing research assessment and commitment to the principles of DORA as part of Plan S. We believe this is critical to enabling change in publication behaviours, allowing the value of research outputs to be assessed on their merits rather than through an aggregated metric based on publication venue. However, we note that although the original publication of Plan S states that members of cOAlition S ‘commit to fundamentally revise the incentive and reward system of science, using the San Francisco Declaration on Research Assessment (DORA) as a starting point’1, the implementation guidance states only that ‘cOAlition S members intend to sign DORA and implement those requirements in their policies.’ We ask cOAlition S to provide clarity regarding the steps that will be taken to drive the ‘fundamental change’ indicated in the original publication.

B. Transformative agreements

While recognizing the need for a route for subscription journals to transition away from publication behind paywalls, we believe that without stringent guidelines and compliance checks, ‘transformative deals’ may have significant unintended consequences, reducing choice and narrowing the market. As identified by Adam Tickell in his 2015 review2, the need for ‘OA policy to offer greater choice to research producers’ remains, and we believe this should be a primary consideration for cOAlition S in considering the future shape of the research and innovation market, particularly as it relates to the assessment and communication of research findings.

‘Transformative agreements’ offer advantage to the largest players and to publishers with substantial subscription businesses, as smaller publishers have to ‘wait in line’ to enter negotiations while those, including but not limited to PLOS, without legacy subscription businesses cannot participate. We acknowledge that the intention of cOAlition S members is that ‘transformative agreements’ should not decrease the amount of money available in the system to fund publishing in other compliant venues; however, we believe this is the likely outcome as limited institutional and library publication budgets become tied into large ‘read and publish’/‘publish and read’ (RAP/PAR) deals. This perpetuates the dominance of the ‘big deal’ in the market, which in its rebranded ‘publish and read’ form has the potential to become the status quo rather than a step towards transformation, much as hybrid journals have become the status quo in relation to open access. Moreover, the transition of subscription ‘big deals’ into ‘RAP/PAR’ deals risks locking the high cost of subscriptions into an open access future, if deals so far are anything to judge by. We would like to see a ‘clear and time-specified commitment to a full Open Access transition’, as outlined in the implementation guidelines, be a central requirement for all journals covered by a ‘transformative agreement’ in order to be considered compliant with Plan S. We also ask for greater clarity on the allowed start and end dates for these agreements.

C. Deposition in open repositories

While we understand that this is a recommendation rather than a mandatory criterion for compliance with Plan S, we believe that the proposal that there be ‘direct deposition of publications by the publisher into Plan S compliant author designated or centralised Open Access repositories’ has the potential to add cost and complexity to compliance.

Currently, we and many other publishers syndicate our published articles to PMC/Europe PMC. The process of direct deposition to each repository is not without cost, requiring both staff and technical resources to set up and to run. These costs will increase should it become necessary to deposit to a range of ‘author designated’ repositories. This is especially the case given the importance of equitable treatment of publications from researchers in different disciplines and/or geographical regions, particularly as cOAlition S grows.

We encourage cOAlition S to reconsider this recommendation and propose deposition in a small number of recognized repositories or dispatch services, to facilitate compliance.

D. Publication costs and APC caps

In the published guidance, cOAlition S calls for ‘full transparency and monitoring of Open Access publication costs and fees’ and indicates the potential for ‘standardisation of fees and/or APC caps’. We understand that the cOAlition has revised this position and intends to call for transparency but not to introduce set caps. We welcome this change of approach, which we would like to see reflected in the next iteration of the written guidance. We believe that requiring transparency will allow funders, or others paying the costs of publication, to assess the value of their payments while minimizing the risk of unintended consequences.

In considering potential unintended consequences, there is a useful parallel with the introduction of tuition fees at universities in England. Since tuition fees were introduced in 1998, they have been capped by the UK government. According to a House of Commons Library briefing paper3, each time that the cap has been raised, almost all English HEIs have increased their fees to the maximum allowed level. When it was announced that the cap would increase to £9,000 from 2012, Lord Willetts, then Minister for Universities and Science, said that the maximum fee would be charged only in ‘exceptional circumstances’4 and it was anticipated that this would ‘create a market in fees’5. This market did not emerge and in fact, nearly all HEIs set their fees at the maximum allowed rate. We believe that there is significant potential for an analogous situation to emerge in relation to APCs. Rather than creating a market where publishers set APCs at the lowest level that covers their costs sustainably, it is more likely that caps would encourage APCs to be set at the maximum allowed level even if this is substantially higher than the publisher’s costs.

Additionally, the cost associated with the publication of an individual article is highly variable dependent on publication venue. The level of editorial activity, including building relationships with, and providing support to, authors, referees and academic editors is a significant contributor to cost but generates substantial value for the research community. The level of selectivity of the journal or platform is also an influencing factor, as more selective publications incur additional costs through assessing articles that do not go on to be published in that venue. While we recognize and support the need to change the measure of selectivity from one focused on journal impact factors, we believe that the ability to differentiate levels of selectivity based on appropriate and meaningful criteria should continue, where selective journals and platforms can demonstrate their value through community engagement and cost transparency. We believe that this will support a thriving research and innovation ecosystem more effectively than moving to a ‘one size fits all’ approach.

  1. Published simultaneously as follows: a) M. Schiltz, available from; b) Schiltz M (2018) Science without publication paywalls: cOAlition S for the realisation of full and immediate Open Access. PLoS Biol. 16(9): e3000031.; c) Schiltz M (2018) Science Without Publication Paywalls: cOAlition S for the Realisation of Full and Immediate Open Access. PLoS Med 15(9): e1002663.; d) Schiltz M (2018) Science Without Publication Paywalls: cOAlition S for the Realisation of Full and Immediate Open Access. Front. Neurosci. 12:656. doi: 10.3389/fnins.2018.00656
  2. Open access to research publications Independent Advice, Professor Adam Tickell Provost and Vice-Principal, University of Birmingham Chair of the Universities UK Open Access Coordination Group,
  3. House of Commons Library, Briefing Paper, Number 8151, 19 February 2018 Higher education tuition fees in England,
  4. As above p.6
  5. As above p.10, Section 3.2


  2. Are there other mechanisms or requirements funders should consider to foster full and immediate Open Access of research outputs?

We believe that diversity and equality of opportunity, including for new entrants to the market, should be retained and encouraged to ensure a thriving and diverse research and innovation ecosystem. Referring to part (B) of our answer to Question 1, we encourage cOAlition S to consider this opportunity to move away from ‘big deal’-style arrangements as rapidly as possible to avoid further consolidation around the largest players in the market.

Focusing on regulation of existing business models, both transformative agreements and APCs, may have the unintended consequence of creating barriers to diversification in the market. We applaud the support indicated in the implementation guidance for ‘a diversity of models and non-APC based outlets’ and encourage cOAlition S to ensure equal emphasis on the development of new business models, alongside consideration of established approaches. We believe this is vital in order to maintain choice for researchers.

Is it possible to decolonize the Commons? An interview with Jane Anderson of Local Contexts -

Traditional Knowledge Labels

Joining us at the Creative Commons Global Summit in 2018, NYU professor and legal scholar Jane Anderson presented the collaborative project “Local Contexts,” “an initiative to support Native, First Nations, Aboriginal, Inuit, Metis and Indigenous communities in the management of their intellectual property and cultural heritage specifically within the digital environment.” The wide-ranging panel touched on the need for practical strategies for Indigenous communities to reclaim their rights and assert sovereignty over their own intellectual property.

Anderson’s work on Local Contexts is a collaboration with Kim Christen, creator of the Mukurtu content management system and Director of the Center for Digital Scholarship and Curation at Washington State University. Local Contexts is both a legal and educational project that engages with the specific challenges and difficulties that copyright poses for Indigenous peoples seeking to access, use and control the circulation of cultural heritage. Inspired by the intervention of Creative Commons licenses at the level of metadata, the Traditional Knowledge Labels recast intellectual property as culturally determinant and dependent upon cultural consent to use of materials.

How can we have an open movement that works for everyone, not only the most powerful? How have power structures historically worked against Indigenous communities, and how can the Creative Commons community work to change this historic inequality?

Jane Anderson discussed these issues as well as some of her more recent work with the Passamaquoddy Tribe in Maine with Creative Commons.

Your project recasts the Creative Commons vision of “universal access to research and education and full participation in culture” through a local and culturally determinant lens. How is the vision of Local Contexts complementary to the CC vision, and how does it come into conflict?
The Local Contexts initiative began in 2010 when Kim Christen and I started to think more carefully about how to support Indigenous communities to address the immense and growing problems being experienced with copyright around Indigenous or traditional knowledge. We had both been working with Indigenous peoples, communities and organizations over a long period of time and had increasingly been engaged in a very specific way with the dilemmas of copyright that existed at the intersection of Indigenous collections in archives, libraries and museums. We were able to see more clearly the ways in which copyright has functioned as a key tool for dispossessing Indigenous peoples of their rights as holders, custodians, authorities and owners of their knowledge and culture.

Combining both legal and educational components, Local Contexts has two objectives. Firstly, to support Indigenous decision-making and governance frameworks for determining ownership, access to and culturally appropriate conditions for sharing historical and contemporary collections of Indigenous material and digital culture. Secondly, to trouble existing classificatory, curatorial and display paradigms for museums, libraries and archives that hold extensive Indigenous collections by finding new pathways for Indigenous names, perspectives, rules of circulation and the sharing of culture to be included and expressed within public records.

Inspired by Creative Commons, we began trying to address the gap for Indigenous communities and copyright law by thinking about licenses as an option to support Indigenous communities.

Our initial impulse was to craft several new licenses in ways that incorporated local community protocols around the sharing of knowledge. Pretty quickly, however, we ran into a significant problem: with the majority of the photographs, sound recordings, films, manuscripts and language materials that had been amassed and collected about Indigenous peoples, and that were now being digitized, Indigenous peoples were not the copyright holders. Instead, copyright was held by the researchers, missionaries or government officials who had done the documenting, or by the institutions where these materials were now held. Or, at the other end of the spectrum, these materials were in the unique space that copyright makes: the public domain. This meant that not only did Indigenous peoples have no control over these materials and their circulatory futures, they also could not apply any licenses – either CC ones or ones that we were developing. This was a problem that we responded to by developing the TK Labels.

Why is it important to problematize the ways in which universal access can undermine cultural participation, particularly for traditionally marginalized groups?
Local Contexts is an effort to initiate questions about how ideas of the universal operate by pointing to sites of difference and locality, especially in how knowledge is shared, circulated and expanded. The vision of Local Contexts emphasizes specificity – that the circulation of knowledge and culture depends upon relationships and context – and if these relationships are formed unevenly, or privilege one cultural perspective above another, then that inequity continues to create a range of future problems.

One of the motivations behind Local Contexts, and this is an interesting question for Creative Commons to consider as well, is: what would it look like if we invested time and support in Indigenous communities who have been disproportionately affected by colonial property laws – including copyright? How does access and openness perpetuate a colonial agenda of taking? And what can be done to change this direction? Where does the Creative Commons community come in to help think through these issues in conversation with Indigenous peoples and through Indigenous experiences? Is it possible to decolonize the commons? What would it look like to imagine a commons that is not totally open, but one that has an informed and engaged approach to openness; one that foregrounds the histories and exclusions embedded within calls for openness and open access? What would it mean to ask questions about the privilege that openness calls for and embeds?

We believe that Local Contexts is one of many efforts that are needed in order to take on this expansive problem. If you start thinking about what kind of information has been taken (through unethical and inappropriate research practices for instance) from Indigenous peoples, communities, lands and territories – and how this has been done without consent and permission, it is possible to start seeing the extent of the problem. For example, Indigenous names have been used for names of cars (Cherokee); for software (Apache); for varieties of strawberries (Sto:lo). For Indigenous peoples, names are not just words in common, they have embedded temporal and relational meanings including integral connections to place. For Indigenous peoples, names matter and are not open for others to use in ways that minimize and reduce them for commercial gain. How have settler-colonial laws and social frameworks created the conditions where no permissions are required to use Indigenous culture? What is the impetus to use Indigenous culture in these ways? Who benefits from using these exclusionary and extractionist logics?

Reciprocal Curation Workflow

How does information colonialism impact the communities where you work? How are you working to mitigate exploitation of cultural resources?
Information colonialism is an everyday problem for all the communities with whom we work and collaborate. It is not only the legacies of past research practices, but how these are continued into the present. There are more researchers working in Indigenous communities now than there were at the height of the initial colonial documenting encounters from the 1850s onwards – and the same logics of extraction through research largely continue. This means that many of the same problems we are trying to address in Local Contexts – namely, that research derived from Indigenous knowledge and participation, and often conducted on Indigenous lands, ends up owned by non-Indigenous peoples – continue.

There is an enormous need to support Indigenous communities as they build their own unique IP strategies and provide resources that assist in this project. At Local Contexts we are committed to this work and we provide as many resources as we can towards this end. Importantly we work directly with communities, and the resources we produce and offer come from these partnerships. We continue to develop tools and resources from direct engagement with communities. Partnering with the Penobscot Nation we just received an IMLS grant to run IP training and support workshops for US based tribes over the next two years. These trainings center Indigenous experiences with copyright law and the difficulties communities have negotiating with cultural institutions over incorporating cultural authority into how these Indigenous collections are to be circulated into the future.

The 6 CC licenses are designed to be simple and self-explanatory, but there are 17 Traditional Knowledge labels and four licenses, creating an intentionally local and culturally dependent information ecosystem. As a project both inspired by the Creative Commons licenses and in conversation with them, how do these labels better serve the contexts in which you work?
The 17 TK Labels that we have reflect partnerships that have identified these protocols as ones that matter for communities in the diverse circulation routes for knowledge. What is important about the TK Labels part of the Local Contexts initiative is that they are deliberately not licenses. That is, we are not limited by the cultural (in)capacities of the law. Indigenous protocols around the use of knowledge are nuanced and complex and do not map easily onto current legal frameworks. For instance, some information should never be shared outside a community context, some information is culturally sensitive, some information is gendered, and some has specific familial responsibilities for how it is shared. Some information should only be heard at specific times of the year and, still for other information, responsibility for use is shared across multiple communities.

The Labels embrace this epistemological complexity in a different kind of way – and they allow for flexibility as well as community specificity to be incorporated in ways that settler-colonial law cannot accommodate.

For instance, a central departure of the TK Labels project from CC is that the TK Label icon remains static, but the text that accompanies each Label can be uniquely customized by each community, and communities maintain control and authority over that text. This is the sovereign right that every community has to determine and express its unique cultural protocols. Alongside this, the TK Labels also expand the meaning of certain kinds of legal terms that have historically been treated as normative – for instance, attribution. With the TK Labels, attribution is usually the first label that a community identifies and adapts for their own purposes. This is because it has been Indigenous names – community, individual, familial – that have been left out of the catalogue and the metadata. For example, the Sq’ewlets, a band of Sto:lo in Canada, translated attribution as skwix qas te téméxw, which literally means name and place. (See how they use their labels.) For the Passamaquoddy Tribe in Maine, attribution is Elihatsik, translated as “to fix it properly”. The Passamaquoddy meaning of attribution is a specific call to address mistakes in institutional, and therefore also settler, cultural memory.

What is one interesting outcome of your recent work?
One of our most important recent projects has been working with the Passamaquoddy Tribe to digitally repatriate, and to correct the record of cultural authority around, the first Native American ethnographic sound recordings ever made in the US, in 1890 – authority the Passamaquoddy people continue to assert. When these recordings were made by a young researcher who visited the community for three days, they functioned as a sound experiment allowing for greater documentation of Indigenous peoples’ languages and cultures. The recordings were never made for the Passamaquoddy community but for researchers in institutions. This is evidenced by the fact that the recordings were not returned to the community until the 1980s – some 90 years after they were made – and then only on cassette tapes of poor sound quality. For community members thrilled to hear their ancestors again after so long, it was simultaneously heartbreaking to be unable to make out what was being said.

From Passamaquoddy People Website

In 2015, the Library of Congress’ National Audiovisual Conservation Center (NAVCC) included these cylinders in its digital preservation program for American and Native American heritage. At the same time as this preservation work was initiated, the American Folklife Center at the Library of Congress, the Passamaquoddy Tribe, Local Contexts and Mukurtu CMS joined together for the Ancestral Voices Project, funded by the Arcadia Foundation. This project involved working with Passamaquoddy Elders and language speakers to listen to, translate and retitle the recordings in English and, for the first time in the historical record, in Passamaquoddy; explaining and updating institutional knowledge about the legal and cultural rights in these recordings; adding missing and incomplete information and metadata; fixing mistakes in the Federal Cylinder Project record; and implementing three Passamaquoddy TK Labels. These add additional cultural information to the rights field of the digital record – in both the MARC record and in Dublin Core – and provide ongoing support for how these recordings will circulate digitally into the future.
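As an illustrative sketch only: the `dc:rights` element used below is standard Dublin Core, but the record, title and label wording are hypothetical – in practice the TK Label text is customized and controlled by the community. A label can travel alongside the conventional rights statement like this:

```python
import xml.etree.ElementTree as ET

# Standard namespaces for an OAI-style Dublin Core record.
DC = "http://purl.org/dc/elements/1.1/"
OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
ET.register_namespace("dc", DC)
ET.register_namespace("oai_dc", OAI_DC)

def record_with_tk_label(title, conventional_rights, tk_label_text):
    """Build a minimal Dublin Core record whose rights field carries both
    the institutional rights statement and a community-defined TK Label."""
    rec = ET.Element(f"{{{OAI_DC}}}dc")
    ET.SubElement(rec, f"{{{DC}}}title").text = title
    # The institutional rights statement stays in place...
    ET.SubElement(rec, f"{{{DC}}}rights").text = conventional_rights
    # ...and the TK Label adds the community's cultural information alongside it.
    ET.SubElement(rec, f"{{{DC}}}rights").text = tk_label_text
    return rec

record = record_with_tk_label(
    title="Passamaquoddy song (1890 cylinder recording)",  # hypothetical title
    conventional_rights="No known restrictions on publication.",
    tk_label_text=("TK Attribution: Elihatsik. This Label asserts "
                   "Passamaquoddy authority over this recording."),
)
print(ET.tostring(record, encoding="unicode"))
```

The point the interview makes is visible in the structure: the label does not replace the legal rights statement, it sits next to it in the same metadata field.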

Library of Congress record with TK labels

Changing how these recordings are understood in the Library of Congress and in the metadata into the future was only one part of this project. A complementary part was working with the Passamaquoddy community to create their own digital platform for the cylinders, embedding them and relating them to other Passamaquoddy cultural heritage. The Passamaquoddy site utilizes the Mukurtu CMS platform and allows for differentiated access at a community level and for various other publics. It does not assume that everything created by Passamaquoddy people is for everyone, including non-Passamaquoddy people. It embeds Passamaquoddy cultural protocols as the primary means for managing access according to Passamaquoddy laws. This is then what is also translated into the Library of Congress through the TK Labels.

The project required working with Passamaquoddy Elders and language speakers to decipher the cylinders, and tribal members are now singing these songs and teaching them to their children. When the Passamaquoddy recordings, with community-determined metadata and TK Labels, were launched at the Library of Congress in May 2018, Dwayne Tomah called on the strength of his ancestors and sang a song that had not been sung for 128 years. The ongoing strength of Passamaquoddy culture, language and survivance was felt by everyone in the room that day. The TK Labels were an important piece of this project: they functioned as a tool to correct a significant mistake in the historical record – namely, that the Passamaquoddy people unreservedly retain authority over their culture, which had been literally taken and authored by a white researcher from 1890 until 2018. (Read more in the New Yorker.)

What are you working on now?
At an international and national level, the TK Labels are an intervention directed at the level of metadata—the same intervention that propelled CC licenses to the reach they have today. Our current work at Local Contexts is threefold. We are finalizing the TK Label Hub. This will allow for a more widespread implementation of the TK Labels. It will be the place where communities can customize their Labels and safely deliver them to the institutions that request them and are committed to implementing them within their own institutional infrastructures and public displays. Our current work with the Abbe Museum in Maine will see the TK Labels integrated into the Past Perfect software as well, allowing for implementation across a wide museum sector. We continue to expand our education work on IP law and Indigenous collections for communities as well as institutions. More generally we believe that any education on copyright must have the history and consequences of excluding Indigenous peoples from this body of law incorporated into how it is taught and understood.

Finally, we just developed two specific labels for cultural institutions. The Cultural Institution (CI) Labels are specifically for archives, museums, libraries and universities that are engaging in processes of collaboration and trust-building with Indigenous and other marginalized communities who have been excluded and written out of the record through colonial processes of documentation and record-keeping.

These CI Labels, alongside the TK Labels for communities and our education and training initiatives, help close the circle, so that the future circulation of these cultural heritage materials, which have been held outside of communities, can be informed by relationships of care, responsibility and authority that reside within the local contexts where this material continues to have extensive cultural meaning.

Read more about the role that CC licenses play in the dissemination of traditional knowledge from our research fellow Mehtab Khan and listen to Jane Anderson speak about her work with the Passamaquoddy archives on the podcast “Artist in the Archive.”

The post Is it possible to decolonize the Commons? An interview with Jane Anderson of Local Contexts appeared first on Creative Commons.

CC0 at the Cleveland Museum of Art: 30,000 high quality digital images now available -

The Cleveland Museum of Art is one of the most visited art museums in the world, and soon it will become one of the most important online collections as well. Today, we are announcing the release of 30,000 high-quality, free and open digital images from the museum’s collection under CC0, available via their API. CC0 allows anyone to use, re-use, and remix a work without restriction.
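As a rough sketch of what programmatic reuse of the collection might look like – note that the endpoint, parameter names and response fields below are assumptions based on our understanding of the museum’s public Open Access API, so check the current documentation before relying on them – one could search for CC0 works and pull out image URLs:

```python
from urllib.parse import urlencode

# Assumed base endpoint of the Cleveland Museum of Art Open Access API.
CMA_API = "https://openaccess-api.clevelandart.org/api/artworks/"

def cc0_search_url(query, limit=10):
    """Build a search URL restricted to CC0-licensed works that have images
    (parameter names are assumptions; verify against the API docs)."""
    params = {"q": query, "cc0": "1", "has_image": "1", "limit": limit}
    return CMA_API + "?" + urlencode(params)

def cc0_image_urls(response_json):
    """Pull web-sized image URLs out of an API response, keeping only
    records explicitly shared under CC0 (field names are assumptions)."""
    urls = []
    for artwork in response_json.get("data", []):
        if artwork.get("share_license_status") == "CC0":
            image = artwork.get("images", {}).get("web", {})
            if image.get("url"):
                urls.append(image["url"])
    return urls

print(cc0_search_url("Monet water lilies"))
```

Because the works are CC0, anything retrieved this way can be used, remixed and redistributed without further permission.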

In line with the museum’s mission to work “for the benefit of all people in the Digital Age,” the Cleveland Museum is leading the charge for comprehensive metadata and open access policy. The museum sees its role as not only providing access, but also creating sincere partnerships that increase utility and relevance in our time.

Creative Commons CEO Ryan Merkley joined museum director William M. Griswold and Chief Digital and Information Officer Jane Alexander at the CMA to announce this release. “I hope this model of working closely together with visionary organizations will be one that we can replicate with other museums, and that this will become the new standard by which institutions share and engage with the public online,” he said. The museum’s leadership echoed the sentiment.

“Open Access with Creative Commons will provide countless new opportunities to engage with works of art in our collection. With this move, we have transformed not only access to the CMA’s collection, but also its usability—inside as well as outside the walls of our museum,” said Griswold.

The newly released images and their associated metadata can also be viewed on CC Search, the Creative Commons image portal that provides access to millions of CC-licensed images from 21 providers. This portal is currently in development and growing, and the Cleveland Museum of Art’s images provide another access point for billions of learners around the world to experience and enjoy cultural heritage. In this release, the CMA joins other institutions that have made the choice to share, including the Metropolitan Museum of Art and the Art Institute of Chicago.

Highlights from the Cleveland Museum of Art’s collection include Claude Monet’s “Water Lilies (Agapanthus)”, William Merritt Chase’s “Portrait of Dora Wheeler,” Albrecht Dürer’s “The Four Horsemen, from the Apocalypse”, and many important works of Indian, African, and Asian art. Our profound thanks to the staff of the CMA for making this partnership possible. This release was due to their hard work and leadership, and we look forward to continued partnership with this important cultural institution.

Watch our social media and Slack for collection highlights and more information, and experience the collection yourself at CC Search.

The post CC0 at the Cleveland Museum of Art: 30,000 high quality digital images now available appeared first on Creative Commons.

Boosting Open Science Hardware in an academic context: opportunities and challenges (PLOS) -

Written by: Jenny Molloy (University of Cambridge), Juan Pedro Maestre (University of Texas, Austin)

Experimental science is typically dependent on hardware: equipment, sensors and machines. Open Science Hardware means sharing designs for this equipment that anyone can reuse, replicate, build upon or sell, so long as they attribute the developers on whose shoulders they stand. Hardware can also be expanded to encompass other non-digital inputs to research such as chemicals, cell lines and materials, and a growing number of open science initiatives are actively sharing these with few or no restrictions on use.

A growing number of academics are developing and using open hardware for research and education, in addition to sharing their papers, data and software through broader open research practices. This brought a large cohort to the Gathering for Open Science Hardware (GOSH) in Shenzhen, China, in October 2018, a four-day event which convened over 110 of the most active users and developers of open science hardware from 34 countries and multiple backgrounds, including academia, industry, community organising, NGOs, education, art and more. PLOS kindly supported an unconference session during GOSH 2018 where students and researchers shared the following opportunities and challenges to boosting open science hardware in an academic context, and planned a course of action to further the goal of the Global Open Science Hardware Roadmap: making open science hardware ubiquitous by 2025.

Opportunities for open science hardware in academia

Open science hardware has some important intrinsic benefits. Firstly, it can reduce the cost of research, democratising opportunity and enabling limited budgets to stretch further. Joshua Pearce of Michigan Tech University has calculated a return on investment of hundreds to thousands of percent for funders of open hardware through a drastic reduction in lab costs. Secondly, it reduces duplication of effort by building on the work of others, and thirdly, it provides opportunities to customise hardware to suit your optimal experimental design, rather than designing your experiment to fit the limitations of available hardware. Moreover, sharing more details of experimental designs facilitates replicability in science. This is needed more than ever given the current lack of trust in science in some societal contexts and fears within several scientific communities of a “reproducibility crisis”.

Gaining additional credit, citations and collaborations are all significant potential opportunities for academics developing open science hardware and are necessary to incentivise those activities. However, cultural change is required within existing systems of academic publication and reward to realise the opportunities. Change is coming: for example, the recently established Journal of Open Hardware and HardwareX encourage formal publication of research advances and designs that are well documented and appropriately licensed, while the PLOS Open Source Toolkit channel highlights and rewards open hardware publications. We know that open approaches can reap rewards, but there is room for further evidence in the hardware context. Open access publications and shared datasets can confer a citation advantage, and many projects developing open research tools report high numbers of collaborations and significant funding that may not have been possible without their culture of sharing. The Structural Genomics Consortium is involved in publishing over two papers per week, partially as a result of hundreds of collaborations through making data and tools freely available. Research funders can be responsive to openness as a strategy to maximise impact: UK-based research centre OpenPlant was awarded £12m to make open technologies for plant synthetic biology, and two open source projects on diagnostics for infectious diseases were awarded >£1m from the UK’s Global Challenges Research Fund.

Educational use of open science hardware also reaps both tangible and intangible benefits for universities. It represents an opportunity to increase the quality of teaching and learning by providing access to instruments that would otherwise be too expensive in the numbers required for effective teaching. It also contributes to building critical thinking skills and breaking open the “black box” of laboratory equipment. There are many academics in the GOSH Community involving their students directly in developing open science hardware, such as air quality sensors at the University of Texas Austin or biological instrumentation through the Biomaker Challenge in Cambridge. Still others such as the Centro de Tecnologia Acadêmica at UFRGS in Brazil are using open hardware tools extensively in student lab practicals and research projects.

Challenges to address if open science hardware is to become ubiquitous

There are several barriers to wider adoption of open science hardware in academia. One stumbling block is institutional buy-in and support: in these times of limited funding, many universities have become conservative in their approaches to intellectual property and the patenting of inventions. Encouraging an open approach to maximising societal and scientific impacts through technology and knowledge transfer requires a compelling narrative. This includes reassurance that openness is contextual. In some cases the traditional route of IP protection and restrictive licensing may be optimal to achieve intended outcomes; in others it is not, and open approaches should be considered a strategic option. It is also important to emphasise that open does not equal non-commercial. Indeed, there are many examples of entrepreneurial academics and companies spinning off to sell open hardware back into academia, but also to industry, non-profits, educational institutions and directly to the public.

Funding for ongoing support and scaling of open science hardware efforts is a perennial and important topic of discussion at GOSH. In the case of open science hardware, private investors may not consider open designs as maximizing profit opportunities but they can still be profitable and generate significant social and scientific returns. A major task for the GOSH academic working group formed at the unconference session is therefore to compile justification for a diverse range of funders including private philanthropists, social impact investors and venture funds to support open science hardware and further the goal of making it ubiquitous and widely used by 2025.

The final topic of discussion during our session was creating awareness among the scientific community, both online and offline at major scientific conferences. Offering community-level incentives, support and guidelines to document and share open science hardware is feasible and there is much low-hanging fruit. However, we have seen in other areas of open research that to achieve ubiquity these community efforts need to be backed by formal incentives and rewards. In other words, the value of open approaches has to be recognised in decisions around funding, promotions and hiring.

Furthering open science hardware through community action

Four priority actions emerged which correspond closely to recommendations in the Global Open Science Hardware Roadmap: i) leverage the GOSH Community and network to produce guidance and case studies for universities, funders and other stakeholders; ii) put open science hardware on the agenda at large disciplinary conferences; iii) raise awareness through mainstream academic channels; and iv) take the initiative within our own institutions to experiment with ideas and build local communities.

We invite anyone who is interested in open science hardware to join this work to ensure that more researchers, students and those outside of academia have access to vital enabling technologies for science. You can sign the GOSH manifesto, join the GOSH Forum to share your projects, and contact us for more information.

NOTE: The PLOS Open Source Toolkit collects papers from across publishers that describe software and hardware with research applications. The site is curated and managed by five active researchers, including the author of this blog post, Jenny Molloy. Meet all the editors here and here. We’re on a mission to make exciting, cost-effective, and high-utility tools accessible to all researchers to eliminate barriers to scientific innovation and increase reproducibility. We post new content monthly. Subscribe for notifications. Currently featured: an open source K-mer-based machine for subtyping HIV-1 genomes.


Many thanks to PLOS for their kind support enabling people in need of financial support to attend GOSH and to the participants in the unconference session: Juan Pedro Maestre (University of Texas, Austin), Pierre Padilla (UPCH), Andre Chagas (University of Sussex), Jenny Molloy (University of Cambridge), Moritz Riede (University of Oxford), Benjamin Pfaffhausen (Freie Universität Berlin), Marina de Freitas (CTA-UFRGS), Minerva Castellanos Morales (Scintia), Tobias Wenzel (EMBL), Anne-Pia Marty (University of Geneva), Alex Kutschera (Technical University of Munich), Eduardo Padilha (University of São Paulo).


GOSH 2018:

Other GOSH images and credits can be found here.

Illustrations from the GOSH Roadmap can be found here.

All Gathering for Open Science Hardware photos and roadmap images are in the public domain under a CC0 waiver and available on Flickr.



Image credit: Nuñez et al (2017), licensed under CC-BY 4.0.

Caption: Bacteria and cell-free protein expression systems generating fluorescent proteins and imaged using the FluoPi.

OpenFlexure Scope:

Image credit: Dr Richard Bowman, University of Bath

Caption: Open source, 3D-printed microscope stage imaging onion cells on a Raspberry Pi camera. The stepper motors enable focusing and moving of the sample stage.

Public Lab:

Image Credit: Public Lab, licensed under CC-BY-SA 3.0

Caption: Members of Public Lab balloon mapping oil spills and water pollution with open source kits.


Image Credit: Sci-Bots Inc.

Caption: Open hardware digital microfluidics system made by Sci-Bots.


We’re gonna party like it’s 1923 -

January 14-18, 2019 is #CopyrightWeek, and today’s theme is Public Domain and Creativity, which aims to explore how copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture.

On January 1, tens of thousands of books, films, visual art, sheet music, plays, and other works passed into the Public Domain in the United States for the first time in twenty years. It’s time to celebrate!

Join us for a grand reopening party in San Francisco on Friday, January 25 from 10AM-7PM.

Co-hosted with the Internet Archive, this celebration will feature keynote addresses by Lawrence Lessig and Cory Doctorow, lightning talks, demos, multimedia displays and more to mark the “re-opening” of the public domain in the United States. The event will take place at the Internet Archive.

In preparation for this event, we asked a few Creative Commons community members to provide reflections on some of their favorite works that have entered the Public Domain this year!

Shanna Hollich, Collections Management Librarian at Wilson College
The Prophet by Kahlil Gibran

I spent a lot of time in my late teens and early twenties feeling lost and alone and confused (as so many of us do). I often looked for solace in books, and it was at a used bookstore in Boston that I found The Prophet by Kahlil Gibran. I had never heard of it before, but I took it home on a whim and read it over and over and over again, finding human connection and comfort in the words of this long-dead Lebanese-American poet. I have treasured my dog-eared copy of that book for many years. Now that it’s in the public domain, I’m excited to have the opportunity to give the work new life and new meaning now that I can legally use, remix, and share these words that meant so much to me.

Eva Rogers, Development Manager at Creative Commons

Caption: Indestructible Object, 1923 (remade 1933; editioned replica 1965), Man Ray (1890–1976). Presented by the Friends of the Tate Gallery, 2000.

Man Ray’s Object to be Destroyed (1923) — later destroyed and remade as Lost Object (accidentally printed/labeled Last Object) and, even later, as Indestructible Object — combines a metronome, an object that marks consistent time for musicians, with a photograph of an eye affixed to its ticking arm. Man Ray would set the piece going when he painted — a rhythm, an audience; keeping time, keeping watch. What does it mean for an art object — a destroyed art object, at that — to enter the public domain? What, now, belongs to the public? Can I recreate it? What if I accidentally (intentionally?) recreate a later iteration? Shall we the public gleefully apply the name Object to Be Destroyed to any number of our creations, which we will subsequently destroy and remake? You may destroy the metronome and the staring eye, but you cannot stop the ticking arm of time. Welcome to the public domain, Object to be Destroyed!

Ramona Ostrowski, Producer, Howlround Theater Commons


I saw an amazing production of George Bernard Shaw’s Saint Joan by NYC-based theatre company Bedlam in Cambridge about four years ago. They had four actors portray all the roles, with one woman playing Joan and three men covering everyone else. Seeing the story this way emphasized Joan’s power but also her humanity, and made her the true emotional center around which the inventive production whirled. I’m so excited that this play has now entered the public domain, because it’s the story of a woman clinging to her convictions and speaking her truth boldly even as the male power structures vilify her for it. It doesn’t take more than a quick scroll through Twitter or glance at Fox News to see the contemporary resonances. I can’t wait to see how inventive artists legally remix and riff off of this text in the coming years.

Jennie Rose Halperin, Senior Communications Manager, Creative Commons

In college I used to go to a long-defunct bar in Greenwich Village for a variety show curated by the old-time musician Eli Smith. (Himself a great interpreter of works in the public domain.) One night, a mostly xylophone band called the “Xylopholks” took the stage and performed “Yes, We have no Bananas!” while attired in banana costumes. Needless to say, they brought the house down. Some day, I hope to recreate this special moment, but until then, I have to enjoy the fact that I can legally use, remix, and reproduce this fabulous tune.

The post We’re gonna party like it’s 1923 appeared first on Creative Commons.

Openness, Mapping, Democracy, and Reclaiming Narrative: Majd Al-shihabi in conversation -

Majd Al-shihabi, the inaugural Bassel Khartabil Free Culture Fellow, is a Palestinian-Syrian systems design engineer focusing on the role of technology in urban systems and policy design. He is passionate about development, access to knowledge, user centered design, and the internet, and experiments with implementing tools and infrastructures that catalyze social change. He studied engineering at the University of Waterloo, in Canada, and urban planning at the American University of Beirut, in Lebanon.

The following is a conversation between Christine Prefontaine and Majd Al-shihabi, reflecting on his work and experiences as a Bassel Khartabil Free Culture Fellow.

The Bassel Khartabil Free Culture Fellowship

The Bassel Khartabil Free Culture Fellowship awards $50,000 + support to an outstanding individual developing open culture in their communities. This unique and life-changing fellowship promotes the values important to Bassel’s work and life: open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity. The Fellowship supported Majd Al-shihabi, the inaugural recipient, on two projects: Building an open source platform for oral history archives, to be used by the Syrian Oral History Archive, and digitizing, releasing, and improving the accessibility of previously forgotten 1940s British Mandate-era public domain maps of Palestine. The common thread: Preserving memory based on openness and collaboration and advancing visions for re-building and moving Palestinian and Syrian societies towards an open, fair, and free future.

Majd’s Story

Can we start with a basic overview of your work and then maybe dig into the fellowship projects that you’re working on?
I’ve been loosely involved with the open community for a long time. When I was studying in Canada at the University of Waterloo, the pressure of school limited my involvement. As soon as I finished I was like, oh, finally I can do the things that I actually am interested in doing. So, slowly, that’s how I got involved with a few open communities locally.

Throughout my studies I’ve mostly worked as a developer, so I’ve been using a lot of open source software. That helped me improve my understanding of the open source community. It is not just about the code of the open source software, but also about how the community dynamics work: who can contribute, who doesn’t, and so on. That has been the formation that has guided my work so far.

When I moved to Lebanon, I moved specifically to work with a project called the Arab Digital Expression Foundation youth camp. ADEF is the organization. It’s a 10-day camp in the summer for people 18 and up. We had participants ranging from 19 to 63 years old. It’s about the intersection of art, technology, and politics, especially in this region, and especially about the production of knowledge and content in the Arabic language.

That was the entry point for me in the Arab open source community — because the camp was very explicit about using open technologies and using open approaches to knowledge production and media production. That was the first time when I was like, I’m producing something that can benefit my community in a very explicit way.

I curated that camp, and then I stayed here and I was like, we have a lot of work to do on openness in this community in Lebanon, so let’s start with this. So I worked on a few smaller projects related to mapping. We worked on the Beirut Evictions Monitor, where we ran workshops to map housing evictions in Beirut, because the city has been under a lot of real estate and housing pressure — to think about how to map it and publish what is appropriate of that data.

Working on those projects was the first step. I started thinking about how to activate the community around mapping and issues of mapping. Because, for example, there is no one authoritative map of Beirut that you can get, especially of the buildings of Beirut. On OpenStreetMap there are areas where some active mapper lives, so you can find all of the buildings in their neighborhood. But they’re drawn from satellite imagery, so they’re not very accurate. Still, there’s no comprehensive map of Beirut. I’m trying to think about how we can use OpenStreetMap to engage the community in mapping efforts to make sure that their communities are on the map — literally — and connecting that with other sources of data, so that organizations like the Beirut Evictions Monitor can use it.

The next step was when my collaborator Ahmad Barclay found the historic maps of Palestine, from just before the ethnic cleansing of Palestine, in the archives. We were like, we can use those maps. They’re really precious. As Palestinians, most of us have not seen what our villages look like. Before I saw those maps, I only knew of one surviving photo of my village and now I have a more textured view. We got really excited by the potential of those maps and we said, what can we do with them? That’s where the Palestine Open Maps project started. Visualizing Palestine hosted a lab with Columbia Studio X in Amman where we developed a prototype, which we carried on to what you see now on the platform.

At the same time, I was also interested in oral history archives. One of my main collaborators and friends has worked on the Palestine Oral History Archive at the American University of Beirut. She has also been a consultant for a few oral history projects and did an assessment of the Syrian Oral History Archive. Often what happens with those archives is that everyone gets really excited about collecting, and then they have like 500 hours of recordings and they don’t know what to do with them. So she did that assessment. Then she was like, you guys need to think more explicitly about what to do with this collection and how to archive it. She said you should not use Omeka as your only solution, and that you should think about a more refined way of addressing the special needs of an oral history archive.

Those two projects were in the background of my mind when the fellowship was announced and I was like, this sounds like the right place to get sustainability while I work on these projects that are really exciting to me. Also, those two projects are very closely linked and both have great potential. For example, if you think about Palestine Open Maps and the Palestinian Oral History Archives — specifically the use of them after they’ve been archived — can you use the maps as a way to spatially navigate an oral history archive? One of my plans is to make that link between the two. To me they’re related in the long term.

The Palestine Open Maps Project has five different archival map sets that show Palestine before it was ethnically cleansed. We’ve been trying to combine those with census data sets and the locality name data sets. And now you can view it on the platform. That was what I presented at MozFest. That’s the project that I’ve been working on for the first part of my fellowship. For the second part, I’ll focus more on building the oral history platform.

We ran a few design workshops with the Syrian Oral History Archive to extract a workflow from the practices of the archivists. The idea was to enhance the workflow and make it applicable to all oral history archivists, but at the same time to make it as tailored as possible to oral history archiving. It’s a delicate balance that we had to hit. Then I did a few sessions with Palestine Oral History Archive and also talked with the Knowledge Workshop, here in Beirut, as potential users of the platform. Now we have a community that’s excited about finally having a way to archive and publish their collections.

From the outcome of the workshops, I started to build a user interface with a company called Calibro that does user interface design. The interface addresses each one of the phases of the archival process. And that’s where I am right now. I just started experimenting with a little bit of code in the past few days, but the majority of the code will be written between now and the end of my fellowship.

Hearing about the mapping project, what stuck out for me was the ability to anchor a story physically in a place. It’s one thing to hear it, but it’s another thing to have the opportunity to go to a place — physically or digitally. It grounds the story. Literally. It’s profound.

My grandmother is still alive and she was born in Palestine. She was one of the people that was ethnically cleansed during the Nakba and she hasn’t been back since. She was 11 years old when it happened so she remembers what it was like. I grew up listening to her talking about our house in Palestine and I know that the village doesn’t exist anymore. It’s been completely destroyed and in its place there’s a forest, a South African memorial forest, a European pine forest. She can name a few places but because they don’t exist anymore you don’t know what those places are. Because she was only 11 years old she doesn’t have that grasp on geography.

But when I got the maps I looked them up. Last summer I was visiting my family — they live in Kitchener/Waterloo, close to Toronto. My grandma was there and I was asking her, Teta, can you describe your house to me again? So she started describing and she was like, oh, it’s on top of the hill called El Khirba. And I looked at the map and there it was: El Khirba. It was labeled on that map. Then she was like, if you look from our house qibli (in the direction of Mecca, south) you would see Esh Shajara, the other village, and sure enough it’s on the map. If you go there right now you wouldn’t see it, but on the map it’s right there. It’s directly south. She would describe all of those landmarks and those features and, sure enough, they’re on the map.

To me, it’s extremely profound. Finally I know what my grandma’s talking about. Even if I can’t access it today, at least there’s this physical remnant that has been left to us. It’s particularly interesting if you’re thinking about archives. I’m more of a technical person, so I’m not as well-versed in the terminology of the philosophy of archives. But my collaborator, Hana Sleiman [see also: Constructing a Palestinian Oral History Archive], taught me this term: “reading against the archival grain”.

Those maps were made by colonizers. During the British Mandate, they went in and decided that now the land of Palestine is theirs and they are going to map it. They made highly detailed maps and now, as the victims of that colonization, we Palestinians can read those maps with a purpose that’s completely different from the purpose that they were intended for by the colonizers. We’re reading those maps in a way that is not in alignment with their original purpose.

This is common among people of the South when they’re reading their archives, especially in colonial archives. That’s one of the really powerful things that we’re enabling through this project: You can understand your own history and you can have a different understanding of your own history by taking a critical look at the archives.

Can you hone in on a moment where you felt a sense of success with this project?
Both projects are a work in progress, but the biggest sense of success that I have is when I demo those projects to people. Especially the Palestine Open Maps project — when I demo it to people who are descendants of Palestinian refugees, I ask them, what’s the name of your grandparents’ village? All of them know the name, but they don’t know where it is geographically. It could be in the north, it could be in the south, they don’t know. I take my phone and I show them, this is what it was like.

People are kind of shocked and taken aback. They spend a surprisingly long time just navigating their map, zooming into details like, you can see where the school was. I was looking for another map and it had a museum there. It’s a small village that has a museum — why? There are all of these nuances about our lives as Palestinians that have been systematically erased that we can actually extract again out of these maps and reconstruct.

One of my goals is to combine this archive of the maps with another archive — the Palestine Oral History Archive — because they have a collection of interviews with people who knew Palestine before the Nakba. Whenever a place is mentioned in those narratives and those interviews, can we have it pinpointed on the map? And then can you hear the story about that place?

People get so excited about this and that, in turn, excites me. I think that’s the biggest success of this project: Using the power of technology, turning this abstract concept of Palestine that we’ve been told about as children — this is your homeland and this is the place where you belong — turning that into something that’s really tangible.

As I’m listening to you, there was a connection that came up in my head. Dave Isay, the person who started StoryCorps has a TED Talk where he describes documenting people’s stories — and sharing them back. When he did that, one of the participants grabbed the printed story and started screaming, “I exist! I exist!” [See minute 2:10] For me, this connects to both the mapping and the oral history. When you do your demo and someone says, this was the name of the place. And you show them: Here it is! It’s not just in your head. The place exists and your story exists.
Totally. I hesitate to talk about this because it always brings weird critiques. But, the central premise of the creation of Israel is that this is a land without a people for people without a land. But if you look at those maps it just shows so clearly that there were people in that place.

If you positioned this project in an intellectual history of the Palestinian struggle, to me it’s a descendant of a project called The Atlas of Palestine by Salman Abu Sitta. The book is in two parts. One is the atlas which compiles the paper maps of Palestine and records about localities and census and so on. It’s a paper book that’s really thick and huge and heavy. And the other component of that book is called The Return Journey. It proposes that the land of the historic Mandate Palestine, between the Jordan River and the sea, can fit everyone. It can fit the four and a half million Jews who live in that land right now, and all of the Palestinian refugees that have been ethnically cleansed.

There’s no need for anyone to have to be forced to leave. We can all live there in a democratic state where everyone is an equal citizen. This project is a small step towards furthering that goal. All of us are equal human beings and we should have equal rights to live in the lands where we belong. [laughing] It gets really heavy whenever you’re talking about Palestine!

That is a beautiful and admirable sentiment. Thinking again about these projects, can you hone in on a moment where you faced a challenge or a struggle?
On the Palestine Open Maps project, even something as simple as getting those maps was a struggle. We kept reading references to those maps in various books about Palestine, but we never actually saw them. We’d see small scans of a single village but we’d never get the access to the whole map.

Then eventually, ironically enough, we found them in the Israeli National Library Archives. They’re all scanned at very high resolutions, which is perfect for us. But the content management system their website uses doesn’t give you the entire image. You can’t just download it; if you right-click on it, it gives you a smaller section. Also: you can’t access .il Israeli domain names from Lebanon, because the two countries are at war. So we had to circumvent that, use Dropbox to download all of the files, and write a script that takes every tile, stitches them together, and saves the result back to Dropbox. It was a very elaborate process technically to circumvent all of those restrictions — whether technical or political — and get those maps. So that’s one thing.
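The stitching step described above is a common pattern when a map viewer only serves small tiles. As a rough illustration (not the project’s actual script), a minimal Python sketch using the Pillow library might look like this, assuming the downloaded tiles are equally sized and named by a hypothetical `<row>_<col>.png` convention:

```python
from pathlib import Path

from PIL import Image  # Pillow


def stitch_tiles(tile_dir, rows, cols, tile_size=256, out_path="map.png"):
    """Stitch a rows x cols grid of equally sized tiles into one image.

    Assumes tiles are named '<row>_<col>.png' (an illustrative convention);
    adjust the naming pattern to match however the tiles were saved.
    """
    sheet = Image.new("RGB", (cols * tile_size, rows * tile_size))
    for r in range(rows):
        for c in range(cols):
            tile = Image.open(Path(tile_dir) / f"{r}_{c}.png")
            # Paste each tile at its pixel offset in the full sheet.
            sheet.paste(tile, (c * tile_size, r * tile_size))
    sheet.save(out_path)
    return sheet
```

The real workflow would also have to handle downloading the tiles and uploading the result, but the core idea is just pasting each tile at its grid offset.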

And then in the Arab world, there’s a lot of technical people, but the good geeks — the nerds that we rely on to build our tools — everyone just leaves and goes to Europe or North America. There is a huge brain drain and when you want to start building a platform like this one, especially when we first started — before I got the fellowship that’s helping me build the platform now — I didn’t have much time to develop it.

It was a very slow process of me wrangling in a few hours here and there to work on development. It was also trying to get people to learn front-end frameworks for Javascript so that we could build it in a modular way that doesn’t turn into spaghetti code and become completely unsustainable a year later. I had to convince people to learn Vue.js as a front-end development framework. We just don’t have enough technical people who could help us grow this project to its full potential.

The project has so much potential, and it gets people so excited, but we don’t have the technical capacity to take it to the next step, because it’s just me and a couple of other people, very part-time, right now. Everyone else who could help is in a different country and unable to work on this.

Then, one of the scary things for me is that I don’t want this project to die after my fellowship is over. It’s always so difficult to fund anything that’s related to Palestine. In terms of sustainability and in terms of funding, I’m kind of scared of not being able to find funding for it over the long-term.

Also, if we talk about the Oral History Archive, there is the question of finding developers to help me out, because otherwise I’m going to be doing it all by myself, and finding them will be difficult. I found someone from Mozilla who was willing to do code reviews for me, which is awesome. But I envision this as an open source project that is sustainable over the long term.

The front-end framework that I’m using just announced plans for a major release that breaks backwards compatibility, due in mid-2019, which is around the time that the platform is going to be released. That means that as soon as the platform launches, the version of the framework we’re using will already be outdated, so we’ll need to work on an update.

All of this means that the only way to make this project sustainable is to turn it into an open source project that has a lot of different institutions invested in it so that we can have a front-end developer and a back-end developer who can spend one or two days a week making sure that it’s running smoothly. A big challenge for me is figuring out how can we activate an open source community around this project — specifically in this region. We need to consolidate the power of the open community so that our projects become more sustainable over the long term, both technically and financially.

What kind of funder support would help you take your work to the next level?
We can split that for the two different projects. For the Palestine Open Maps Project, the project is not about the maps themselves, it’s about the story that they tell. How can we build storytelling tools based on those maps that reveal the nuance of Palestinian life over the long term? To do that, you need a team of three people: a user experience designer, a front-end developer, and maybe a researcher who could extract all of the narratives. At least three people, maybe more. So we would need funding for that team to sit down together and collaborate — let’s say for a year — and make this project reach its full potential.

The cool thing about this project is that it’s providing the raw data, a base that other projects can build on. In the current phase of the Palestine Open Maps Project, we’re vectorizing all of the map data. You can already download all of the data and it’s all licensed as CC0 — no rights reserved.

One of my major inspirations is the New York Public Library’s NYC Space/Time Directory, which digitized maps of New York that were made by fire insurance companies. One of my favorite geographers and cartographers, her name is Leah Meisterlin, has done amazing work on cross-referencing different data sets with the fire insurance maps data set. So, after it was vectorized by the Space/Time team, she overlaid that data with other data and she came up with this really nuanced vision of what New York looked like in the 1800s. Where the rich people lived, where the poor people lived, as well as the class distribution. It’s so fascinating!

This place where we walk right now, it used to be inhabited by people and this is what the character of this neighborhood looked like 100 years ago, 150 years ago. If I can do the same thing with Palestine Open Maps for Palestine, that would be an amazing thing for me.

One of the major goals for the oral history archival tool is that we wanted to point out all of the epistemological decisions and ontological decisions that an archivist has to make when they’re creating an archive. So something as basic as do you do transcription or do you do segmentation? It’s a big question mark because there are schools of people who are very strict adherents of one way or another of doing oral history archiving. There are advantages and disadvantages to both and there is no correct answer. Hana and I tried to incorporate those decisions in the platform.

How do segmentation and transcription differ, for those of us who are not familiar?
Transcription is when you take every single word and you write it down. With segmentation, the goal is to preserve the orality of an oral history testimony. So if you transcribe, you can read the text and you know the content, but you lose the orality — the tone, the nuance of the language, the intonation, and so on. But it’s really good for searching. You can just search for a keyword, then find all its appearances in the text.

With segmentation you say, okay, from this second [timestamp] in the interview the person was introducing themselves and explaining where they’re from. And then from this second to this second they’re talking about the chemical attacks in Ghouta, for example. You have keywords and subject headings for each one of those segments. You don’t have a word-for-word transcript, but what you do have is an index of the content of that segment. You can still search it, but you are forced to listen to it so that you can get the texture of the sounds.
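To make the segmentation idea concrete, the index described above could be modeled as timestamped records with subject headings and keywords, plus a simple search over them. This is only an illustrative sketch; the field names are hypothetical, not the platform’s actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class Segment:
    """One indexed span of an oral history recording (times in seconds)."""
    start: float
    end: float
    summary: str
    subjects: list = field(default_factory=list)
    keywords: list = field(default_factory=list)


def search_segments(segments, term):
    """Return segments whose index mentions the term (case-insensitive)."""
    term = term.lower()
    return [
        s for s in segments
        if term in s.summary.lower()
        or any(term in k.lower() for k in s.keywords)
        or any(term in h.lower() for h in s.subjects)
    ]


# Example index for one interview, following the description above.
interview = [
    Segment(0, 95, "Narrator introduces themselves and where they are from",
            subjects=["biography"], keywords=["introduction"]),
    Segment(95, 410, "Recollections of the chemical attacks in Ghouta",
            subjects=["Syrian conflict"], keywords=["Ghouta", "chemical attacks"]),
]
```

A search for “Ghouta” would return only the second segment, pointing the listener to the right span of audio rather than to a line of transcript.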

So segmentation is like a metadata approach?
Totally. I think that this platform is really useful in pointing out the decisions that an archivist has to make. We’re trying to create a guide that accompanies the use of the platform so it’s not just stand-alone software, it sits in the context of this debate in the oral history community.

How can it be sustainable on both fronts: in continuing the conversation on an intellectual level of how to archive an oral history collection, and how can we make sure that the actual code is sustainable?

Hana Sleiman is my collaborator on the MASRAD: Platform for the Syrian Oral History Archive project [website will be live soon!]. Ahmad Barclay and Hanan Yazigi are my main collaborators on the Palestine Open Maps project. With two other people we’re starting to think about how to create a collective that embraces those two projects plus our other projects around knowledge production and knowledge dissemination, especially in the Arabic language but generally around this region. I see MASRAD, which is the name of the collective that we’re trying to create, as a sustainable vehicle for the Syrian Oral History Archive project — but it’s not the ultimate answer.

Beyond the money, what would you say that the Bassel Khartabil Free Culture Fellowship contributed to your work?
The first thing that I should say is that I recognize the importance of our people to us. Bassel Khartabil, who my fellowship is named after, I was not around when he was around. I was studying in Canada. But I knew of him and everyone I’ve talked to in our community right now has had some interaction with him. His legacy is still there. If there’s one thing that this fellowship has given me, it’s access to that network of people who have similar beliefs, who have been touched by the same values that Bassel was striving towards. Access to all of the people in his community.

I’m afraid of idealizing him. Of course he was an amazing guy, but he’s not a perfect guy. He was a very active member in our community, and if you want to kill a movement, you kill its leaders. That’s what happened to the Palestinian movement in the eighties — there was a series of assassinations of Palestinian leaders all over Europe by the Mossad. Car bombs and poisonings and so on. This is what happened when we lost someone like Bassel.

What this fellowship has given me is access to that network and a chance to connect people and disparate projects together with the weight of the three big organizations that are sponsoring this fellowship: Mozilla, Creative Commons, and Wikimedia.

When I went to MozFest I was meeting my people! Especially Jon Phillips and Mahmoud Wardeh, who is “@lurnid” on the internet. We had this really beautiful moment of connecting over Palestinian-ness and our desire to push for openness and for that connection in our community.

It’s those beautiful human interactions that the community has given us.

Right. It’s like, these are my people. It comes back to “I exist, we exist.”
Yeah, that’s so true.

The movement you were talking about, how would you describe it? What is that legacy?
It has a couple of aspects to it. One is the bigger umbrella that is the struggle for democracy in our region. In 2011 we were so hopeful. I can’t even tell you the level of hopefulness that was engulfing the entire region. I was living in Italy when Mubarak stepped down and I could feel it from there. Then that quickly collapsed over the next few years.

But we still believe, regardless, that the tool to accomplish our goal, which is having democratic representation of ourselves, is openness, with all of its permutations. Whether we’re talking about open source, having access to the inner workings of the tools that we are using, or whether we’re talking about open institutions, having access to archives of the state and having access to data that’s being produced by the state.

There’s an idiom in English: sunlight is the best disinfectant. So the more open that we are, the more capable we are at disinfecting our region from the corruption that is very deeply situated in it.

That’s the legacy: How can we use the tools of openness to extend our goals of democracy and participation and representation?

Those are all the questions I have for you, but is there anything more you want to tell me?
This is a lot more emotional than I thought it was going to be!

I feel you on the emotional bit! These are challenging and profound issues that go beyond one people. Your vision for what you want to achieve and your values touch everybody. What you’re discussing is very profound for everyone. As you were talking about how there’s space for everyone, I had a vision of being able to use those stories that are grounded in specific places to enter into dialogue with the people who now live there. Sharing stories can be the beginning of a truth and reconciliation process. They help people to listen to each other, make space for each other, and go forward.
At MozFest I was doing the demo at the science fair, and three things stood out for me. One of the things that’s really cool at MozFest is that there was a lot of ethnic diversity. It was not just Europeans and North Americans. It was everyone, and that was really cool.

Among those people who came were a lot of South Asians. Personally, I feel a lot of solidarity with South Asian people because we’ve both been colonized by the United Kingdom. One of the lines that I had in my demo is, the British loved making maps. And there’s always this mutual look of recognition whenever there is a South Asian person in the crowd that I’m demo-ing to. They smile and I can catch it, and there’s this moment of solidarity between us. There’s mutual understanding even if we don’t have to explicitly say it.

Only two people had a negative reaction to the project, and one was this young woman. I’m happy for people to ask questions and learn from my experience, but she was asking them in a very aggressive way. Questions like, “Is it normal for people to be ethnically cleansed during a war?” It’s not normal. Even if it were normal, it shouldn’t happen! Then she was asking me all of these basic questions that showed she didn’t actually know anything about the conflict. And as she was asking me all of these questions, there was this other guy who immediately identified himself as an Israeli, and he kept saying things that denied my Palestinian-ness. Like, “Why do you call yourself a Palestinian refugee? You don’t count as a Palestinian refugee.” I said, “I have the goal of keeping your right of return to what you consider your Jewish homeland. I want to keep that. In return, give me my right of return as well. Your right of return is 2000 years old. My right of return is 70 years old. This desire to return is a mutual feeling between us, and you should be as understanding of it and of me as I am of you.” That was what I was trying to convey to him, but he was very rooted in his denial of this.

I wonder if this project, if it combines with oral history, if it combines with other programs that add nuance and texture to Palestinian life, can stand in opposition to narratives that just say "Palestinians want to kill us and throw us in the sea." If we can use all of these tools to enhance the image that Israeli Jews have of Palestinians, then maybe we can reach a solution before it's too late.

The first step of hate speech is to dehumanize, and I see your work as infusing that humanity back in, restoring the texture and the depth through mundane observations like, "From my village here, I could see that village there." It's memory, it's not political. And it creates the opportunity for a bridge. Thank you.
Thank you for being a great listener.

 Photo of Majd above copyright by Cynthia Kreichati, used under a Creative Commons Attribution license.

The post Openness, Mapping, Democracy, and Reclaiming Narrative: Majd Al-shihabi in conversation appeared first on Creative Commons.

Building CC’s Network at scale for a new era of growth and opportunity -

Creative Commons Global Network Strategy by Giulia Forsythe. CC0 Source: Flickr

How can we build a Global Network at scale, empower members and communities to lead, and drive a new era of growth and opportunity for Creative Commons and its community? CC has been engaged with this question over the past few years as we rebuild our Global Network to work better together. Today, we're celebrating 306 individual members and 42 institutional members! Membership is distributed across 68 countries and 31 chapters – a truly global movement for the Commons.

Structured membership has been the key to the network’s growth. With a network site and robust vouching system, our members are self-organizing in platforms, committees, and chapters with clear, inclusive pathways for contribution.

In November, the network's governing council met to solidify and approve network activity, which means that now is a great time to join the platform of your choice. The list of platforms, working documents, and an invitation to get involved is below:

  1. Open Education Platform
    • Platform working document
    • 845 members from 55+ countries
    • Platform vision, mission, scope, goals, and principles approved
    • Working on process to propose, select, fund and launch international open education activities / projects
  2. Copyright Reform Platform
    • Platform working document
    • 150 members
    • Platform rationale, goals and objectives, areas of engagement approved
    • Working on drafting collaborative projects
  3. Community Development Platform
  4. Open GLAM Platform
  5. Culture Platform

Whether you’re a CC Newbie or a seasoned Commoner, you’re invited to join the platform of your choice to connect, build, and grow. Want to jumpstart your involvement? Register for the CC Summit today and meet community members from around the world.

Other ways to get involved:

The post Building CC’s Network at scale for a new era of growth and opportunity appeared first on Creative Commons.

PLOS Board Appointments -

After a careful search and much consideration, we are excited to share with our community five new appointments we’ve made to the PLOS Board. This is a pivotal time for PLOS, and as you’ll see, each member will bring us a different perspective, which will enable us to expand the ways in which we serve our scientific communities.

Our new Board Chair is Alastair Adam, currently CEO of innovative digital textbook publisher, FlatWorld, who brings to the role not only a strong understanding of publishing – including scientific journals – but also his business savvy and strategic skills. Alastair joined the Board effective November 1 and assumed the Chair role on January 1, 2019, replacing our longtime Board Chair, Gary Ward (more on Gary a little later).

We also added Dr. Simine Vazire, who is currently a Professor in the Department of Psychology at UC Davis, where her research focuses on one of the oldest and most fundamental questions in psychology: how do we know ourselves? In 2017, she was awarded a Leamer-Rosenthal Prize for Open Social Science in recognition of her efforts to advance reproducibility, openness and credibility in the social sciences. She previously served as a senior editor of Collabra: Psychology and as Editor-in-Chief of Social Psychological and Personality Science. Her scientific and editorial expertise brings a well-rounded and diverse perspective to our Board, and will help to ensure that working scientists retain a strong voice on it.

Dr. Victoria Coleman joined the Board in May 2018. She is currently the Chief Technology Officer at the Wikimedia Foundation, where she sets the organization's technical roadmap for the evolution, development, and delivery of core platforms and architecture. Victoria brings valuable technology experience to the Board at a time when PLOS, like many mid-size publishers, faces important and difficult choices about its technology infrastructure. Victoria serves in several advisory roles, including on the Board of the Santa Clara University Department of Computer Engineering and as Senior Advisor to the Director of the University of California Berkeley's Center for Information Technology Research in the Interest of Society.

We also wanted to ensure that we maintain deep experience in PLOS' core biomedical science fields, and we are very lucky to have Professor Keith Yamamoto of UCSF agree to join us (effective February 1, 2019). Keith is both a highly regarded scientist running his own research lab and a leader with extraordinary experience in the policy arena, having focused much of his career on science practice, education, communication, and advocacy, including strong and early support for OA. He currently serves as UCSF's first vice chancellor for Science Policy and Strategy.

Last but by no means least, Suresh Bhat joined us on November 1, 2018 as incoming Chair of the Finance Committee. Suresh brings to PLOS not only deep financial knowledge but also experience at a top research university and a passion for education. Suresh has headed finance programs for a number of financial institutions. He is currently CFO and Treasurer at the Hewlett Foundation; prior to that, he was CFO at the Haas School of Business at UC Berkeley (and is a Haas and Cal alum).

I would be remiss if I did not take the opportunity here to express my heartfelt thanks to both Gary Ward, our now former Board Chair, and Mike Eisen, one of the co-founders of PLOS, both of whom left the Board in 2018. In his seven years as Board Chair, Gary led the Board with passion, wisdom and integrity, and has been both counsel and friend to many of us in the organization. Mike is of course irreplaceable in every way. His vision, zeal and dedication are a big reason that PLOS not only exists but has had such a deep impact on scientific communication. I have no doubt that Mike will continue to be one of PLOS' greatest advocates (and yes, let us know when we get it wrong – as good friends do!).

While goodbyes are never easy, we are excited to embark on this new chapter for PLOS with the fresh wisdom of so many exceptional, dedicated individuals. Please join us in welcoming our new Board members!

