Internet Censorship Editathon in San Francisco

Creativecommons.org -

Join us in San Francisco on December 14 for a Wikipedia Editathon on internet censorship. If you’ve never edited a Wikipedia article, don’t worry! Experts will be on hand to help you through the process.

From the announcement:

Join volunteers from the Electronic Frontier Foundation, Creative Commons, and the Bay Area Wikipedia community to write and edit about human rights and free speech online. We will improve, create, and update Wikipedia articles related to global internet censorship. People regularly turn to Wikipedia to get a basic overview of internet censorship, so it’s crucial that we ensure Wikipedia’s coverage is up-to-date and accurate. Internet censorship means that users across the world aren’t always using the same Internet, cannot access the same websites, or can’t contribute to or read the same Wikipedia articles. Speech-chilling government surveillance, blocking, and filtering are all methods of censorship, and they are globally ubiquitous. Internet censorship impacts users everywhere, because fewer people are able to upload or contribute to the Internet or access information online.

In addition to improving articles on internet censorship as a broad topic, we will focus on improving and updating key articles about internet censorship in individual countries and, where possible, ensuring the content is also available in the relevant local languages.

Please join us in person or online to help improve the public conversation on Internet Censorship. All levels of Wikipedia-editing experience are welcome!


Creativecommons.org -


December marks the time of year when many of us start thinking about making year-end gifts to our favorite charities, and #GivingTuesday has become one of the most popular days for donating.

As you’re thinking about which organizations you’ll support this year, we hope you’ll think about how Creative Commons affects your life (and the lives of millions around the world).

Our core values are rooted in helping people to share their ideas, art, research, and culture with the rest of the world. That sharing can really add up too. Our recent State of the Commons report (translated by volunteers into 12 languages and counting) found that textbooks shared with open licenses have saved students more than $100 million.

CC is a small organization, but we still need resources to educate policy makers, support online sharing platforms, promote the benefits of open licenses, and grow our community. So, if you are in a giving mood on #GivingTuesday, consider a gift that supports #SharingEveryday!

With thanks,

Banner image: gift box icon by Pham Thi Dieu Linh licensed under a CC BY 3.0 license / snowflake icon by Paulo Volkova placed in the public domain


German appellate court upholds common-sense attribution

Creativecommons.org -

All six Creative Commons licenses require licensees to attribute the original creator. Although we provide guidelines for attributing a work, we also recognize that standards for how and where licensees should provide attribution vary a lot from medium to medium. That’s why CC licenses allow licensees to fulfill the attribution requirement “…In any reasonable manner based on the medium, means, and context in which You Share the Licensed Material.”
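
In web publishing, that usually means assembling a short credit line from a few standard pieces of information (often summarized as Title, Author, Source, License) and placing it near the work. The Python sketch below is purely illustrative and is not official CC tooling; the work title, author name, and source URL in it are hypothetical placeholders.

    # Illustrative sketch only: build a TASL-style (Title, Author, Source, License)
    # attribution line to place near a CC-licensed image on the page where it is used.
    # The work title, author, and source URL below are hypothetical placeholders.

    def build_attribution(title, author, source_url, license_name, license_url):
        """Return a plain-text credit line suitable for display next to the image."""
        return (f'"{title}" by {author} ({source_url}) '
                f'is licensed under {license_name} ({license_url})')

    print(build_attribution(
        title="Bridge at Dusk",                                         # hypothetical work
        author="A. Example",                                            # hypothetical creator
        source_url="https://example.org/photos/123",                    # hypothetical source page
        license_name="CC BY-SA 4.0",
        license_url="https://creativecommons.org/licenses/by-sa/4.0/",
    ))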

A recent court case in Germany has raised questions among some CC license users about what qualifies as reasonable attribution. Must websites that use openly licensed images make the attribution information visible at all times, even in a gallery of image thumbnails? And what about when a visitor accesses an image directly, via the “View Image” feature in her web browser? Must attribution information be visible then too?

Fortunately, we believe that common sense has won out in a recent appellate ruling.

Does “View Image” violate CC licenses? / CC BY-SA
Source photo: Bethlehem College Preso / Locus Research / CC BY-SA (context)

The dispute, which first went to court in February, involves the terms of use of a stock photo site. Although the case did not directly involve Creative Commons licenses, the licensing terms in question were quite similar to the wording of CC licenses’ attribution requirement. Like CC licenses, they required attribution appropriate to the medium in which the photos are used.

The defendant had diligently attributed the rightsholder on the page where they used the picture, but the website also had a dynamically generated “overview” gallery showing preview thumbnails of pictures, and the site did not prevent visitors from downloading the images directly via “View Image.” In neither of these two views was attribution information visible.

The trial court ruled that the preview thumbnails (which did not include attribution information) were acceptable under the two rulings of the German Federal Supreme Court on preview images. Regarding direct viewing via “View Image”, the court ruled that this was not covered by the thumbnail rulings, and it interpreted the stock photo site’s terms of service to require attribution no matter how the picture is viewed. The judges said that, in the case of “View Image”, the name of the rightsholder would need either to be integrated into the picture itself (i.e., as an additional part of the graphic) or to be part of the picture’s URL.

The stock photo provider, which was not a party to the case, provided a statement on behalf of the defendant saying that its terms of service were not intended to require that the name of the author (also) be part of the URL. Nonetheless, the court ruled otherwise. Its main argument was that “appropriate to the medium” applied only to how attribution was to be given, not to whether it was to be given; because the picture could be viewed separately, attribution was also required in that view, no matter how complicated the implementation would be.

After the decision became public, a debate started amongst bloggers and others who regularly use CC-licensed pictures, many of whom worried that the court’s strict interpretation of the attribution requirement might also shape how the BY condition of CC licenses is interpreted, at least under German copyright law. It was obvious that almost none of the CC-licensed pictures used on the net are attributed in the graphic itself or in the URL, and that it would be virtually impossible to bring attribution across the net up to such a standard.

On appeal, the appellate court in Cologne indicated in an oral hearing in August that it did not intend to follow the original decision. First, in the court’s view, the terms of service strictly prohibit editing or adapting the pictures taken from the site, which speaks against any obligation (or even right) of users to insert attribution information into the graphics themselves; doing so would constitute an impermissible edit. Second, the court interprets “appropriate to the medium” to cover not only how attribution is given, but also whether and when attribution is necessary. The court regards the “View Image” function as a mere technological side-feature of how the web works, not a separate type of use that requires separate approval by the rightsholder. In effect, this means the function does not trigger obligations on the user’s part beyond those triggered by the use of the picture in the regular browser view. The plaintiffs subsequently withdrew their claims. Users of pictures that are available under standard terms can relax again, to some extent, regarding the practicalities of attribution.

While its legal applicability is limited to Germany, this ruling demonstrates that flexible attribution requirements can be well understood by all parties and can adapt to changing technology. The ways we share content on the web are changing all the time, but if you take a reasonable, logical approach to attribution under CC licenses, misunderstandings will be few and far between.

Read more: Stellungnahme zu möglichen Auswirkungen der Pixelio-Entscheidung (in German)

CC goes to #Mozfest 2014

Creativecommons.org -

Creative Commons staff, affiliates, and supporters were active participants and contributors at this year’s Mozilla Festival, which has become an annual rallying point for the Open Web and our shared values. Our sessions covered a wide range of issues, from new technology, to open education and science, to working as an open organization. Thanks to Mozilla for inviting us. We’re already looking forward to next year’s event.

Christos Bacharakis / CC BY-NC-SA

CC makes tools for makers

by Matt Lee and Ryan Merkley

In CC makes tools for makers, CC’s Ryan Merkley and Matt Lee joined Mozilla dev Ali Al Dallal to talk about tools and technology solutions that could enhance the reach and value of CC-licensed works. CC shared some early screens for The List, a new mobile app that allows anyone to create and share a list of wanted images, and allows users to respond by taking pictures and sharing them in a global archive, all licensed CC BY. CC also shared CC Search, which will aggregate results from publicly-facing search APIs of openly licensed works. Ali demoed a prototype of MakeDrive, which will allow a user to search for a CC image, then grab it into their own local synced storage.
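
As a rough illustration of the aggregation idea behind a federated search tool like CC Search, the sketch below queries several search endpoints and merges the results into one list. It is only a sketch under stated assumptions: the endpoint URLs and the JSON response fields are hypothetical placeholders, not the actual API of CC Search or of any platform.

    # Hypothetical sketch of federated search over openly licensed works.
    # Endpoint URLs and the JSON response shape are assumed, not real APIs.
    import requests

    SOURCES = {
        "source_a": "https://api.example-a.org/search",   # hypothetical endpoint
        "source_b": "https://api.example-b.org/search",   # hypothetical endpoint
    }

    def search_all(query, license_filter="by"):
        """Query each source and return one merged list of results."""
        merged = []
        for name, url in SOURCES.items():
            resp = requests.get(url, params={"q": query, "license": license_filter}, timeout=10)
            resp.raise_for_status()
            for item in resp.json().get("results", []):   # assumed response field
                merged.append({
                    "provider": name,
                    "title": item.get("title"),
                    "creator": item.get("creator"),
                    "license": item.get("license"),
                    "url": item.get("url"),
                })
        return merged

    # Example usage (would only return results against real endpoints):
    # results = search_all("mountains")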

Participants broke into smaller groups to discuss challenges and opportunities, and identified solutions that were shared back with the group. Issues ranged from UX and usability needs to opportunities for monetization. Everyone was encouraged to join The List mailing list at creativecommons.org/thelist for updates, and to head to hackspace.cc to join the development process and contribute.

Portrait of a Creative Commons Artist

by Jane Park


In Portrait of a Creative Commons Artist, a group of musicians, filmmakers, museum curators, and arts education practitioners gathered to discuss the kinds of art being created in today’s digital landscape and how and why they share their own artworks and the artworks of others. Surprisingly or not, the artists’ motivations for sharing included no commercial goals. Motivations cited included wider distribution, growing a community of like-minded artists, eliciting feedback or emotion, and prompting new inferences and ways of thinking.

We also identified barriers to sharing in certain environments, such as child privacy in arts education and the time-consuming effort involved in cataloging artworks for museums. We addressed individual artists’ hang-ups about sharing, such as fear of plagiarism and lacking the confidence in the quality of one’s art to open it up to public criticism. Lastly, we brainstormed potential solutions to overcome these barriers and help artists feel more comfortable sharing their works online under more liberal re-use terms, such as Creative Commons licenses. Such solutions included: a tool that could display a canonical representation of your work, including all derivatives made from the original; a better attribution prompt enabling artists to specify exactly how they want to be attributed; and a registry of artworks in the commons. Additional needs included improved interaction design with artworks online, consulting or advice on how to share such networked art, and simplified best practices around sharing and attributing open artworks. The full agenda and notes from the session are available, in addition to Kevin’s coverage of the session in The Open Standard, “The Plight of the Open-Source Artist” — which is aptly licensed under CC BY-SA.

This session affirmed and informed our intentions with several CC projects in development, such as a registry of CC-licensed works, a smartphone application that would make it easier to contribute photos to the commons (The List), and the Free Culture Trust, a coalition of organizations that would offer comprehensive services to artists wanting to donate their art to the commons.

Mapping #SchoolofOpen and #TeachtheWeb to places

by Jane Park

In Mapping #SchoolofOpen and #TeachtheWeb to places, community members from Creative Commons, School of Open, and Mozilla Webmaker came together to physically map their open web education programs, such as Maker Party and the recent School of Open Africa launch. We “hacked” a map of the world by creating our own version of it, and most interestingly, Africa was front and center with the U.S. largely as an afterthought. After mapping, we self-organized into two streams: those leading open web education for adults and those leading open web education for kids and teens. After much discussion, we are now planning to better bridge our communities to increase our impact in several regions, including Africa, India, and the U.S. We will be creating a digital version of our Hack the Map activity, allowing others to add themselves virtually over time, and also planning a joint School of Open and Mozilla Webmaker event with our communities for 2015.

OpenMe – Kids can Open

by Jane Park

In OpenMe – Kids can Open, a few of us from the CC, School of Open, and National Writing Project communities gathered to discuss current efforts around CC and open web education for kids and strategies for replicating those efforts in other jurisdictions. Kelsey Wiens, CC South Africa public lead and School of Open program lead for CC4Kids, shared her experience piloting CC4Kids in schools. Generally, starting with private schools produced more favorable results, as did partnering with existing organizations that have strong ties to schools, such as Innovate South Africa’s Code4ct. We are now in conversation to pilot the CC4Kids model in the U.S. with the National Writing Project’s Educator/Innovator network. To start, we will host a webinar and share a call to the network for after-school pilot participants.

Walking the talk – How to work open

by Jane Park

In Walking the talk – How to work open, CC facilitated the strand on partnerships and collaboration: how to work together better as open organizations with overlapping missions and projects. How do we avoid reinventing the wheel and collectively have greater impact? Part of the solution lies in better communications and transparent organizational practices, but how do we translate these needs into action items? We brainstormed several “best case scenarios” and in the end came up with a strong list of concrete solutions, with an Annual Capacity Building Conference for open organizations at the top of the list. Such a conference would focus specifically on knowledge sharing for the purpose of building capacity within and outside of our organizations to achieve our missions and realize our vision of universal access to research and education and full participation in culture. Other ideas included:

  • A Natural Language Processing tool that links cross-organizational communications in different languages in one hub
  • Culture training for organizations that encourages failure and knowledge sharing, versus an environment where keeping information secret results in a competitive edge
  • Working groups of ambassadors in each city to represent all open organizations in that city (and that would work to bring in new organizations seeking representation)
  • A Task Rabbit-like platform for open organizations that would match organizations needing capacity in a certain area with an organization that could provide it

Complete notes from the session are available, in addition to results from the Community Building track of which this session was a part. The wranglers for the track are now working on a community building toolkit and will be rallying all organizational representatives in the next few months to make one of the above ideas into a reality. We vote for the Annual Capacity Building Conference of open orgs!

Skills Mapping for Open Science

by Billy Meinke

Billy Meinke / CC BY

In the Skills and Curriculum Mapping for Open Science session, facilitators and participants on Mozilla Science Lab’s “Science on the Web” track came together to build a map linking the many nouns and verbs that describe interactions between people and scientific research, all of which are connected to the Commons. An underlying focus of the session was to identify the ways scientists and citizens interact with the outputs of research, including content, data, and code.

Taking a simplified approach to mapping these nodes will make it easier for others to expand on the map and to translate the nodes into learning objectives that can be included in education and training programs around open and reproducible science. Over the two days of the festival, we facilitated the mapping of outputs and interaction types, aiming to capture key statements that describe the way scientific artifacts are created, reused/remixed, and shared. We welcomed scientists and non-scientists alike to stop by and critique the map as it was constructed, and to add nodes or connections where they felt something was missing. Have you ever produced a dataset for your research blog? Then you’ve created data! Have you ever downloaded an Open Access research paper? Then you’ve reused content! Have you ever uploaded a script to GitHub? Then you’ve shared code! It’s easy to drop most interactions people have with science into these buckets once we take a step back and simplify the statements around what we do with scientific content and code in the Commons.
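
To make the idea concrete, here is a minimal sketch of how such statements could be represented as a small graph of nouns (people and research outputs) connected by verbs (create, reuse, share). The entries are illustrative examples only, not the actual map produced at the festival.

    # Illustrative sketch: represent "actor -- verb --> output" statements as edges,
    # then group them by actor. The data below is made up for illustration.
    from collections import defaultdict

    edges = [
        ("researcher", "creates", "data"),
        ("researcher", "shares", "code"),
        ("citizen", "reuses", "content"),
        ("student", "remixes", "data"),
    ]

    outgoing = defaultdict(list)
    for actor, verb, output in edges:
        outgoing[actor].append((verb, output))

    # Each statement can later be translated into a learning objective,
    # e.g. "researcher creates data" -> "learner can publish a dataset openly".
    for actor, interactions in outgoing.items():
        for verb, output in interactions:
            print(f"{actor} {verb} {output}")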

To allow others to build on the skills mapping done at Mozfest this year, a digital version of the map has been uploaded to GitHub and is open for anyone to revise, tweak, and add to as they wish. Plans to expand this work include a full build-out of high-level learning objectives and alignment to existing Open Educational Resources in science training programs. A number of universities have expressed interest in piloting an undergraduate or graduate-level course on open and reproducible science, and the idea is that this map will be useful when developing such a course, revealing how and where the skills learned in it apply to the way we work with content and code in the Commons.

K-12 OER Collaborative launches RFP for math and English

Creativecommons.org -

Math, Math, Math, math, mathh….maaah….. / Aaron Escobar / CC BY

The newly founded K-12 OER Collaborative has released an RFP for the creation of open educational resources (OER) in mathematics and English language arts and literacy. As all content developed under this RFP will be openly licensed under CC BY 4.0, U.S. states, territories and school districts (and anyone else in the world) may freely reuse, revise, remix, redistribute and retain these educational resources.

Forty-three US states + Washington DC + Guam + American Samoa + US Virgin Islands + Northern Mariana Islands (map) have adopted the Common Core State Standards (CCSS)… and they all need current, high-quality, affordable, CCSS-aligned educational resources for their students, teachers, parents and districts.

Will these US States and territories have the public funds necessary to update educational resources (including textbooks) for these two subjects?

According to the Association of American Publishers, school districts across the U.S. spend over $8 billion on instructional materials every year. Textbooks quickly fall into disrepair, students are not allowed to write in or keep their books as they graduate from each grade, and teachers are not legally or technically empowered to update outdated educational resources. In addition, much of this spending goes toward costly yearly subscription fees for digital content that school districts merely lease (not own).

This aggregate demand represented by the nationwide need for new CCSS-aligned educational materials creates a unique opportunity for states to acquire higher quality, more effective content in a smarter, far less expensive, and far more flexible manner, and make these resources available to teachers, parents and districts. Specifically, states and districts can transition from expensive and rigidly controlled materials to OER.

The RFP specifically seeks complete courses for the following grades and subjects:

  • K–2 English Language Arts/Literacy
  • 3–5 English Language Arts/Literacy
  • 6–8 English Language Arts/Literacy
  • 9–12 English Language Arts/Literacy
  • K–5 Mathematics
  • 6–8 Mathematics
  • 9–12 Mathematics — Integrated/International Pathway (Secondary Mathematics I, II, III)
  • 9–12 Mathematics — Traditional Pathway (Algebra 1, Geometry, Algebra 2)

Courses will be designed to meet Common Core State Standards, accessibility standards, technical specifications, and an open licensing requirement of CC BY 4.0 on all new content produced. For details on the development process, see the complete RFP.

An informational webinar will take place next week on December 3, 2014 at 10:00 AM PST for those interested. RSVP at http://k12oercollaborative.org/rfp/webinar/.

The deadline for an initial Letter of Intent is January 9, 2015 by 5:00 PM PST.

About the K-12 OER Collaborative

The K-12 OER Collaborative is a coalition of eleven U.S. states and eight organizations, including Creative Commons. Together we are working to make quality K-12 educational resources aligned to state standards and accessible under the most open Creative Commons license, CC BY, so that we can drive down the cost of K-12 education for everyone. Learn more about the collaborative at http://k12oercollaborative.org.

The best community we could ask for

Creativecommons.org -


November 25, 2013, was a day we had looked forward to for years — the official launch date of Version 4.0 of the Creative Commons licenses. But despite months of planning, something unexpected started to happen just after we hit publish: our website started to fail.

We spent the next 12 hours working to fix the current setup while simultaneously moving our website to higher-performance servers. That situation was maddening: for a few hours, half of the world could see the new 4.0 licenses, and half couldn’t. Finding a fix was our highest priority. All hands were on deck to ensure we delivered on our promise of providing stable, trustworthy infrastructure for our licenses.

And deliver we did. By the morning of the 26th, the entire world awoke to a new set of CC licenses — licenses that reflect two years of work by some of the best minds in copyright law on the planet.

I’m telling you about the site outage for two reasons. First, it shows us for what we are: a very small organization with extremely limited resources. CC licenses will always be free, but maintaining them isn’t. Whether it’s tech infrastructure, adoption support, or helping users understand the licenses, our stewardship responsibilities are ongoing, in demand, and require resources.

Second, and more importantly, it says a lot about you. A lot of you were up all night with us. The people who could see the new licenses were excitedly sharing details with those of you who couldn’t, and asking us how they could help. I remember laughing to myself, “How many site outages get live-blogged?” Basically, you’re the best community we could ask for.

If you can, please consider making a gift to help carry Creative Commons into 2015. Together, we built state-of-the-art licenses that we’ll all be using for the next decade. But there’s a lot more work to do, for all of us.

Thank you for sharing with us in this dream of a world where knowledge and culture are more accessible to everyone. We’ll never stop fighting for that world, even if it means pulling a few all-nighters.

Bill & Melinda Gates Foundation to require CC BY for all grant-funded research

Creativecommons.org -

Philanthropic foundations fund the creation of scholarly research, education and training materials, and rich data with the public good in mind. Creative Commons has long advocated for foundations to add open license requirements to their grants. Releasing grant-funded content under permissive open licenses means that materials may be more easily shared and re-used by the public, and combined with other resources that are also published under open licenses.

Yesterday the Bill & Melinda Gates Foundation announced it is adopting an open access policy for grant-funded research. The policy “enables the unrestricted access and reuse of all peer-reviewed published research funded, in whole or in part, by the foundation, including any underlying data sets.” Grant funded research and data must be published under the Creative Commons Attribution 4.0 license (CC BY). The policy applies to all foundation program areas and takes effect January 1, 2015.

Here are more details from the Foundation’s Open Access Policy:

  1. Publications Are Discoverable and Accessible Online. Publications will be deposited in a specified repository(s) with proper tagging of metadata.
  2. Publication Will Be On “Open Access” Terms. All publications shall be published under the Creative Commons Attribution 4.0 Generic License (CC BY 4.0) or an equivalent license. This will permit all users of the publication to copy and redistribute the material in any medium or format and transform and build upon the material, including for any purpose (including commercial) without further permission or fees being required.
  3. Foundation Will Pay Necessary Fees. The foundation would pay reasonable fees required by a publisher to effect publication on these terms.
  4. Publications Will Be Accessible and Open Immediately. All publications shall be available immediately upon their publication, without any embargo period. An embargo period is the period during which the publisher will require a subscription or the payment of a fee to gain access to the publication. We are, however, providing a transition period of up to two years from the effective date of the policy (or until January 1, 2017). During the transition period, the foundation will allow publications in journals that provide up to a 12-month embargo period.
  5. Data Underlying Published Research Results Will Be Accessible and Open Immediately. The foundation will require that data underlying the published research results be immediately accessible and open. This too is subject to the transition period and a 12-month embargo may be applied.

Trevor Mundel, President of Global Health at the foundation, said that Gates “put[s] a high priority not only on the research necessary to deliver the next important drug or vaccine, but also on the collection and sharing of data so other scientists and health experts can benefit from this knowledge.”

Congratulations to the Bill & Melinda Gates Foundation on adopting a default open licensing policy for its grant-funded research. This terrific announcement follows a similar move by the William and Flora Hewlett Foundation, which recently extended its CC BY licensing policy from its Open Educational Resources grants to apply foundation-wide to all project-based grant funds.

Regarding the deposit and sharing of data, the Gates Foundation might consider permitting grantees to use the CC0 Public Domain Dedication, which allows authors to dedicate data to the public domain by waiving all rights to the data worldwide under copyright law. CC0 is widely used to provide barrier-free re-use of data.

We’ve updated the information we’ve been tracking on foundation intellectual property policies to reflect the new agreement from Gates, and continue to urge other philanthropic foundations to adopt open policies for grant-funded research and projects.

The state of the digital commons – a new report from Creative Commons

CC Danmark -

Creative Commons has just published a report presenting the state of open content on the internet – and it makes for positive reading, because the numbers are off the charts. There are now 882 million CC-licensed works online, and it is also gratifying to see that creators who use CC licenses increasingly lean toward the most open licenses, CC BY and CC BY-SA, the so-called “free culture” licenses.

The statistics show that fully 58% of the works allow commercial use, and 76% allow further adaptation. That is a sharp increase from 2010 and indicates how free sharing as a tool for artists and content creators is advancing by leaps and bounds. However, the report also shows that it is primarily in North America and Europe that CC licenses have really taken hold. Africa, the Middle East, South and Central America – and to some extent Asia – still lag somewhat behind, which of course also reflects a very different copyright context. On the other hand, the most popular tools used around the world increasingly support the use of CC licenses. A full 9 million websites use CC licenses, including giants such as YouTube, Flickr and Wikipedia.

Last but not least, there are also exciting prospects on the education front. Fourteen countries have already implemented legislation that supports growth in the development and use of openly licensed educational materials – and more are on the way! It will be exciting to see the coming years’ reports, in which we anticipate enormous growth in this area.

You can see the full report and the accompanying infographic here – and at Creative Commons Danmark we have helped produce a Danish translation, in case you feel like sharing it with your network.




State of the Commons

Creativecommons.org -

Today, we’re releasing a new report that we think you will want to see. State of the Commons covers the impact and success of free and open content worldwide, and it contains the most revealing account we’ve ever published, including new data on what’s shared with a CC license.

We found nearly 900 million Creative Commons-licensed works, dramatically up from our last report of 400 million in 2010. Creators are now choosing less restrictive CC licenses more than ever before — over half allow both commercial use and adaptations.

We’re also celebrating the success of open policy worldwide. Fourteen countries have now adopted national open education policies, and open textbooks have saved students more than 100 million dollars. These are big moves making big impacts.

Please help us spread the word about this groundbreaking report.

If Creative Commons plays a role in how you use the internet or share your work, please consider making a gift to support the organization. Creative Commons licenses will always be free, but they would not exist without your generous support.

Can the OER acronym make us open? Thoughts on the „Opening the Curriculum” report

European Open EDU Policy Project -

Babson Survey Research Group has recently published the results of its survey study „Opening the Curriculum: Open Educational Resources in U.S. Higher Education, 2014″. The study (funded by the William and Flora Hewlett Foundation and with support from Pearson) is unique in providing a statistically valid, quantitative view of the ways that American academic staff understands and uses OERs.

The study drew the attention of the OER community by providing an objective measure of awareness of OER: 5% of respondents declare they are „very aware”, 15% that they are „aware”, and 14% that they are „somewhat aware”. Is that a lot, or too little? Commentators have focused on whether this is good or bad news for OER (see David Wiley here or Phil Hill here, for example). In my opinion, that’s not the key issue raised by the study.

The challenge of defining „open”
The study raises the fundamental issue of OER as a concept that is, on the one hand, difficult and unknown to educators – and which at the same time has to be used if we are to promote a proper understanding of „open”. The report describes in detail the difficulties of properly defining OER for the purpose of the questionnaire. The authors note that if the term „open educational resources” is provided without an explanation, educators understand it to mean a broad range of freely available resources, most of which don’t meet any of the accepted OER definitions. On the other hand, a definition that uses examples to become more precise „proved too leading for the respondents, and artificially boosted the proportion that could legitimately claim to be ‚aware’.” In the end, they chose the following statement:

„How aware are you of Open Educational Resources (OER)? OER is defined as „teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others.” Unlike traditionally copyrighted material, these resources are available for „open” use, which means users can edit, modify, customize, and share them.”

The respondents were then asked to provide some examples and to confirm their understanding of OER by choosing statements that they would use „to describe the concept of OER to a colleague”. Over 70% would choose availability for free, and over 50% the ability to remix or the ease of combining with other materials. Creative Commons licensing is mentioned least often (by 28% of respondents). The last item should be troubling for OER advocates, as free licensing is considered a necessary element of open education.

I don’t think this is the correct way of measuring OER awareness. Respondents report their understanding of OER definitions only after having been provided with the very definition – which must introduce a bias. This is a general problem with the survey (and, more generally, with the survey method): it assumes a level of clarity in understanding OER that in real life is not present among the studied faculty. Setting aside a narrow group (5-20% of respondents) who are clearly aware of OERs (and who can be asked specific questions), for the other respondents the survey simultaneously measures and builds awareness.

Can we speak about OER without mentioning „OER”?
One of the stranger results of the study is that those who know and use OER chose specific statements about the OER definition as often as those who are not aware of such resources. Similarly, only 34% of respondents are aware of OER (including the 13% who are „somewhat aware, but not sure how to use them”), while 50% of faculty declare that they use OERs. And finally, if open licensing is commonly described as a key element of the OER model, then why do only a third of those who use OER consider it important? All these mysteries have a simple possible solution: academics simply don’t know what they mean when they answer questions about „OER”.

(There’s a chance – a small one, in my opinion – that people recognise OER by specific „brands” rather than by the licensing model. It would have been interesting to ask about awareness of the most popular OER projects, such as OpenStax.)

This is ultimately not a problem with the survey itself, as care was clearly taken to create a proper survey methodology. It is a problem faced by all OER advocates – in most cases, we’re not only promoting an alternative intellectual property rights model for education; we also have to make educators aware of the very issue of IPRs. It’s an issue that many educators don’t understand or don’t care about – they either ignore it or expect that it will be solved by their institution. (In Poland, we gathered research data on this – but I assume that the situation is more or less similar around the world.)

Without making them aware of OER, we cannot achieve change. So we have to make people care and worry about the very issue that we’d like to see become insignificant. Without a „strict” understanding of OERs, we face openwashing (to use David Wiley’s term), a dilution of the open model. Low declared awareness of Creative Commons licenses, and of their significance as part of an explanation of what OER is, shows just how difficult this task is. And the risk of openwashing will grow as OER becomes more mainstream.

What I would do differently (suggestions for next year’s study)
I think that a study of OERs should not map awareness of the concept itself. Similarly, asking respondents to declare willingness to use OER in the future offers little predictive power with regard to their future actions. Instead, we should try to map and understand practices around the use of resources by educators – and then decide whether they fit a definition of what OER is. For example, educators could be asked to keep digital „resource use diaries”, which could then be analysed, with links checked for open content licensing.

We also need to go beyond quantitative survey methods – these are great for mapping well-understood concepts, but when facing issues that are still being constructed in society, qualitative studies are much more important. Surveys provide us with general driving directions rather than precise maps. Interviews and ethnographies could help define the „real-life understanding of open” and show whether it overlaps with the formal definitions of open.

Some nuggets from the study (if you are still reading)
Cost is among the least important criteria used by academics for choosing resources. 3% worry about cost, while 20% care about ease of use, 50% about quality and 60% about efficacy. Obviously, these costs are not covered by instructors, who rationally do not worry about them. But the data suggests that the typical OER argument – „it’s free” – will not be convincing for educators.

Only 35% of educators are „very aware” of copyright (even fewer of the public domain and Creative Commons). This is an extremely low figure for a knowledge-intensive sector in a knowledge-based society. The survey asks about „licensing models” – but this is also, and more importantly, an issue of user rights.

Asked about deterrents to OER adoption, a third of respondents mention a lack of knowledge about permission to use – the most striking number for a study on resources that come with an explicit permission to use.

When asked what types of resources they use, faculty members who declare OER use mentioned images (89%) and videos (88%), followed by video lectures and tutorials (60%). Ebooks and textbooks are used relatively often, but below the 50% mark. This suggests that some of the most frequently used OERs are incidental (images). It might also be a measure of a shift in American higher education away from traditional, printed resources. (It would be useful to collect similar data for non-open resources.)

SciDataCon 2014 Recap

Creativecommons.org -

Photo by Puneet Kishor published under CC0 Public Domain Dedication

Earlier this month, CODATA and World Data System, both interdisciplinary committees of the International Council for Science, jointly organized SciDataCon, an international conference on data sharing for global sustainability. The conference was held Nov 2-5, 2014, on the campus of Jawaharlal Nehru University, New Delhi, India. Creative Commons Science had a busy schedule at the conference attended by 170+ delegates from all over the world, many from the global south.

Photo by Puneet Kishor published under CC0 Public Domain Dedication

We started early with a full-day workshop on text and data mining (TDM) in cooperation with Content Mine. The workshop was attended by a mix of PhD students and researchers from the fields of immunology and plant genomics. It was really rewarding to see the participants get a handle on the software and work through the exercises. Finally, the conversation about legal uncertainty around TDM apprised them of the challenges, but bottom-up support for TDM can be a strong ally in ensuring that this practice remains out of the reach of legal restrictions.

During the main conference we joined panel discussions on data citation with Bonnie Carroll (Iia), Brian Hole (Ubiquity Press), Paul Uhlir (NAS) and Jan Brase (DataCite), and on international data sharing with Chaitanya Baru (NSF), Rama Hampapuram (NASA) and Ross Wilkinson (ANDS). We also participated in a daily roundup, organized by Elizabeth Griffin (CNRC), of the state of data sharing as presented at the conference.

SciDataCon, which used to be called CODATA, is held every two years and is an important showcase of open science around the world, not least because it brings together many scientists from the global south. A lot remains to be done to make real-time, pervasive data sharing and reuse a reality in much of the world, but there are heartening signs. At the national level, India’s data portal holds promise, though making data licensing information more explicit and data easily searchable by license would make it more useful. Citizen science projects in the Netherlands, India, and Taiwan demonstrated how crowds can be involved in experiments while ensuring the user-generated content is made available for reuse, and SNEHA’s work on understanding perspectives on data sharing for public health research was a particularly insightful reminder of the value of listening to feedback from participants.

We look forward to continuing to work with CODATA and WDS to promote and support open science and data initiatives around the world, particularly in the global south, and we hope for more success stories at the next SciDataCon.


Finnish translation of 4.0 published

Creativecommons.org -

We are thrilled to announce our first official translation of 4.0, into Finnish. Congratulations to the CC Finland team, who have done an outstanding job. The translation team consisted of Maria Rehbinder of Aalto University, legal counsel and license translation coordinator of CC Finland; Martin von Willebrand, Attorney-at-Law and Partner, HH Partners, Attorneys-at-law Ltd: for translation supervision; Tarmo Toikkanen, Aalto University, general coordinator of CC Finland; Henri Tanskanen, Associate, HH Partners, Attorneys-at-law Ltd: main translator, and Liisa Laakso-Tammisto, translator. Particular thanks go to Aalto University, HH Partners, and the Finnish Ministry of Education and Culture for their support.

Maria Rehbinder, Martin von Willebrand, Tarmo Toikkanen, Henri Tanskanen, and Liisa Laakso-Tammisto; photo Mikko Säteri, CC BY

Internationalization was one of the five main goals of the 4.0 licenses, so this is an important milestone for the CC community. Our translation policy was written to reinforce that goal: if the licenses work everywhere, everyone should be able to use them in their own language without needing to worry about what the original English version says. The official translation gives anyone, anywhere access to the official legal text of the 4.0 licenses in Finnish.

Particular kudos go out to this team for their detailed work: producing linguistic translations is difficult! Many words don’t have exact equivalents between languages, especially where you’re bringing in specialized language from countries with different legal systems. Teams working on translations go through a detailed review of their work with CC to ensure that the meaning of the documents lines up. This often involves many detailed questions about exact meanings of words and the legal concepts they refer to, especially when no one on the CC legal team speaks the language. (If you’re particularly curious, you can look at some of the notes in the translators’ guide.) The Finnish team anticipated most of the questions we might have asked, providing a detailed explanation that will be useful as an example to others, and their thorough work has paid off.

Keep your eyes out: several more translations are in the final stages of review and will be published in the coming months! In the meantime, we join CC Finland in celebrating the launch of the first official 4.0 translation.

Read CC Finland’s announcement.

Representing the Public Domain at the EU Observatory on Infringements of IPR

Planet CC -

Last week Communia joined the “European Observatory on Infringements of IPR”, which is hosted by the European Union’s Office for Harmonization in the Internal Market (OHIM). The Observatory’s task is to provide the EU Commission with insights on every aspect of IPR infringement. It does so primarily by conducting surveys and studies on how, where […]

CCANZ November newsletter

Planet CC -

Creative Commons NZ news: The University of Canterbury has instituted mandatory deposit of academic research. Lincoln University has passed a wide-ranging Open Access policy. Kiwi author Thomasin Sleigh has published her novel Ad Lib under CC. Latest from NZCommons: There are heaps of new articles over at NZCommons.org.nz! Learn about the Open Government Partnership, CC and the courts, and why Open Access is […]

Przegląd linków CC #156

Planet CC -

Open education 1. David Wiley analyzes the results of the excellent Babson survey report on open educational resources in U.S. higher education. If you don’t have time for the whole report, at least take a look at his post, in which he shows how these results indicate how open educational resources need to develop (a huge number of people are interested in using them) and how they will probably continue to […]

Te Papa’s openly licensed images

Planet CC -

The Museum of New Zealand Te Papa Tongarewa has now made nearly forty thousand images freely downloadable from its Collections Online digital database, giving the public access to the highest-resolution images it can and opening the way for creative reuse. Around twenty thousand of these images are ‘No Known Copyright’ but upwards of seventeen thousand […]


The Voyager Golden Record

Creativecommons.org -

“Voyager Golden Record Cover Explanation” by NASA Jet Propulsion Laboratory. Licensed under public domain via Wikimedia Commons

The Voyager spacecraft are carrying with them sounds of the Earth, of our civilization, recorded on a 12″ gold-plated copper disc, a golden record, along with instructions for how to play them.

Lily Bui, a graduate student in the MIT Comparative Media Studies program, built a lovely website that allows everyone to enjoy the sounds and music from the golden record via an attractive, easy-to-use web interface. In a further burst of inspiration, Lily has also dedicated her website to the public domain via a CC0 Public Domain Dedication.

In her words, “To be perfectly frank — I mostly designed this for myself so that I wouldn’t have to access the archival audio through the Library of Congress portal.” Well, it turns out a lot of people share Lily’s point of view. Ever the academic, she was taking a course at MIT that “examined the ‘migration of cultural materials’ into the digital space, combining traditional humanities with computational methods.” She is convinced her work is grounded in theory. Perhaps so, for we love the sounds and music so much that we have yet to read Humanities Approaches to Graphical Display by Johanna Drucker.

Join Lily and all of us at Creative Commons and give the Voyager Golden Record a listen.

