Category Archives: Education Online

Statistical methods for assessing agreement between two methods of clinical measurement


In clinical measurement comparison of a new measurement technique with an established one is often needed to see whether they agree sufficiently for the new to replace the old. Such investigations are often analysed inappropriately, notably by using correlation coefficients. The use of correlation is misleading. An alternative approach, based on graphical techniques and simple calculations, is described, together with the relation between this analysis and the assessment of repeatability.


Clinicians often wish to have data on, for example, cardiac stroke volume or blood pressure where direct measurement without adverse effects is difficult or impossible. The true values remain unknown. Instead indirect methods are used, and a new method has to be evaluated by comparison with an established technique rather than with the true quantity. If the new method agrees sufficiently well with the old, the old may be replaced. This is very different from calibration, where known quantities are measured by a new method and the result compared with the true value or with measurements made by a highly accurate method. When two methods are compared neither provides an unequivocally correct measurement, so we try to assess the degree of agreement. But how?

The correct statistical approach is not obvious. Many studies give the product-moment correlation coefficient (r) between the results of the two measurement methods as an indicator of agreement. It is no such thing. In a statistical journal we have proposed an alternative analysis, [1] and clinical colleagues have suggested that we describe it for a medical readership.

Most of the analysis will be illustrated by a set of data (Table 1) collected to compare two methods of measuring peak expiratory flow rate (PEFR).



The second step is usually to calculate the correlation coefficient (r) between the two methods. For the data in fig 1, r = 0.94 (p < 0.001). The null hypothesis here is that the measurements by the two methods are not linearly related. The probability is very small and we can safely conclude that PEFR measurements by the mini and large meters are related. However, this high correlation does not mean that the two methods agree:

(1) r measures the strength of a relation between two variables, not the agreement between them. We have perfect agreement only if the points in fig 1 lie along the line of equality, but we will have perfect correlation if the points lie along any straight line.

(2) A change in scale of measurement does not affect the correlation, but it certainly affects the agreement. For example, we can measure subcutaneous fat by skinfold calipers. The calipers will measure two thicknesses of fat. If we were to plot calipers measurement against half-calipers measurement, in the style of fig 1, we should get a perfect straight line with slope 2.0. The correlation would be 1.0, but the two measurements would not agree — we could not mix fat thicknesses obtained by the two methods, since one is twice the other.

(3) Correlation depends on the range of the true quantity in the sample. If this is wide, the correlation will be greater than if it is narrow. For those subjects whose PEFR (by peak flow meter) is less than 500 l/min, r is 0.88 while for those with greater PEFRs r is 0.90. Both are less than the overall correlation of 0.94, but it would be absurd to argue that agreement is worse below 500 l/min and worse above 500 l/min than it is for everybody. Since investigators usually try to compare two methods over the whole range of values typically encountered, a high correlation is almost guaranteed.

(4) The test of significance may show that the two methods are related, but it would be amazing if two methods designed to measure the same quantity were not related. The test of significance is irrelevant to the question of agreement.

(5) Data which seem to be in poor agreement can produce quite high correlations. For example, Serfontein and Jaroszewicz [2] compared two methods of measuring gestational age. Babies with a gestational age of 35 weeks by one method had gestations between 34 and 39.5 weeks by the other, but r was high (0.85). On the other hand, Oldham et al. [3] compared the mini and large Wright peak flow meters and found a correlation of 0.992. They then connected the meters in series, so that both measured the same flow, and obtained a “material improvement” (0.996). If a correlation coefficient of 0.99 can be materially improved upon, we need to rethink our ideas of what a high correlation is in this context. As we show below, the high correlation of 0.94 for our own data conceals considerable lack of agreement between the two instruments.
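Point (2) above is easy to demonstrate numerically. The sketch below uses hypothetical skinfold-caliper values (not data from the paper): one "method" reads exactly twice the other, so the correlation is perfect while the agreement is clearly poor.

```python
import numpy as np

# Hypothetical illustration of point (2): calipers measure two thicknesses
# of fat, so "caliper" is exactly double "half_caliper". The correlation is
# perfect, yet the two measurements disagree systematically.
rng = np.random.default_rng(0)
half_caliper = rng.uniform(5.0, 25.0, size=30)   # fat thickness, mm
caliper = 2.0 * half_caliper                     # two thicknesses of fat

r = np.corrcoef(caliper, half_caliper)[0, 1]
mean_diff = np.mean(caliper - half_caliper)      # systematic disagreement

print(f"r = {r:.3f}")                            # perfect correlation
print(f"mean difference = {mean_diff:.1f} mm")   # far from zero
```

The correlation comes out at 1.000 even though one method's readings are double the other's, which is exactly why r cannot measure agreement.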


It is most unlikely that different methods will agree exactly, by giving the identical result for all individuals. We want to know by how much the new method is likely to differ from the old: if this is not enough to cause problems in clinical interpretation we can replace the old method by the new or use the two interchangeably. If the two PEFR meters were unlikely to give readings which differed by more than, say, 10 l/min, we could replace the large meter by the mini meter because so small a difference would not affect decisions on patient management. On the other hand, if the meters could differ by 100 l/min, the mini meter would be unlikely to be satisfactory. How far apart measurements can be without causing difficulties will be a question of judgment. Ideally, it should be defined in advance to help in the interpretation of the method comparison and to choose the sample size.

The first step is to examine the data. A simple plot of the results of one method against those of the other (fig 1), though without a regression line, is a useful start, but usually the data points will be clustered near the line and it will be difficult to assess between-method differences. A plot of the difference between the methods against their mean may be more informative. Fig 2 displays considerable lack of agreement between the large and mini meters, with discrepancies of up to 80 l/min; these differences are not obvious from fig 1. The plot of difference against mean also allows us to investigate any possible relationship between the measurement error and the true value. We do not know the true value, and the mean of the two measurements is the best estimate we have. It would be a mistake to plot the difference against either value separately because the difference will be related to each, a well-known statistical artefact. [4]
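The difference-against-mean plot described above can be sketched in a few lines of Python. The PEFR readings here are made up for illustration; they merely stand in for the Table 1 data, which is not reproduced in this excerpt.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                          # render off-screen
import matplotlib.pyplot as plt

# Sketch of the difference-against-mean plot (fig 2), using hypothetical
# paired PEFR readings in place of the paper's Table 1 data.
rng = np.random.default_rng(1)
large = rng.uniform(150, 650, size=17)         # large Wright meter, l/min
mini = large + rng.normal(0, 35, size=17)      # mini meter, with random error

mean_pair = (large + mini) / 2.0               # best estimate of the true value
diff_pair = large - mini                       # between-method difference

plt.scatter(mean_pair, diff_pair)
plt.axhline(diff_pair.mean(), linestyle="--")  # mean difference (bias)
plt.xlabel("Average PEFR by the two meters (l/min)")
plt.ylabel("Difference, large minus mini (l/min)")
plt.savefig("diff_vs_mean.png")
```

Note that the x-axis is the mean of the two readings, not either reading alone, for the reason given in the paragraph above.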

via Statistical methods for assessing agreement between two methods of clinical measurement.

Comparing methods of measurement: why plotting difference against standard method is misleading

My reason for jumping into stats was to directly compare two measurement methods… with multiple trials, on multiple ILDs (inter-landmark distances).  I don’t really go for “funny name, lol” things, but when Bland and Borg are cited in the same paper on stats (which I long thought of [cluelessly/ignorantly] as boring), an exception must be made.  Eponysterical.

But getting real, the issues raised by Bland and Altman sound pretty interesting, and they raise the issue that many tests of this sort may be using misleading information… I have tried to duplicate their methods in my own little H.T.-UGR/Inquiry Study.




When comparing a new method of measurement with a standard method, one of the things we want to know is whether the difference between the measurements by the two methods is related to the magnitude of the measurement. A plot of the difference against the standard measurement is sometimes suggested, but this will always appear to show a relationship between difference and magnitude when there is none. A plot of the difference against the average of the standard and new measurements is unlikely to mislead in this way. This is shown theoretically and illustrated by a practical example using measurements of systolic blood pressure.


In earlier papers [1,2] we discussed the analysis of studies of agreement between methods of clinical measurement. We had two issues in mind: to demonstrate that the methods of analysis then in general use were incorrect and misleading, and to recommend a more appropriate method. We saw the aim of such a study as to determine whether two methods agreed sufficiently well for them to be used interchangeably. This led us to suggest that the analysis should be based on the differences between measurements on the same subject by the two methods. The mean difference would be the estimated bias, the systematic difference between methods, and the standard deviation of the differences would measure random fluctuations around this mean. We recommended 95% limits of agreement, mean difference plus or minus 2 standard deviations (or, more precisely, 1.96 standard deviations), which would tell us how far apart measurements by the two methods were likely to be for most individuals.
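The 95% limits of agreement described above are a short calculation. This is a minimal sketch with invented paired readings, not the authors' blood-pressure data:

```python
import numpy as np

# 95% limits of agreement: mean difference (bias) plus or minus 1.96
# standard deviations of the differences. The paired readings below are
# illustrative only.
method_a = np.array([142., 128., 135., 150., 121., 138., 145., 130.])
method_b = np.array([145., 125., 140., 148., 124., 134., 149., 128.])

d = method_a - method_b
bias = d.mean()                  # systematic difference between methods
sd = d.std(ddof=1)               # random fluctuation around the bias
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.2f}")
print(f"limits of agreement = ({lower:.2f}, {upper:.2f})")
```

If the resulting interval is narrower than the largest difference that is clinically unimportant, the two methods can be used interchangeably.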


via Comparing methods of measurement: why plotting difference against standard method is misleading.

amor mundi: Hannah Arendt on Technology and Nature

We have seen that the animal laborans could be redeemed from its predicament of imprisonment in the ever-recurring cycle of the life process, of being subject to the necessity of labor and consumption, only through the mobilization of another human capacity, the capacity for making, fabricating, and producing of homo faber, who as a toolmaker not only eases the pain and trouble of laboring but also erects a world of durability. The redemption of life, which is sustained by labor, is worldliness, which is sustained by fabrication. We saw furthermore that homo faber could be redeemed from his predicament of meaninglessness, the “devaluation of all values,” and the impossibility of finding valid standards in a world determined by the category of means and ends, only through the interrelated faculties of action and speech, which produce meaningful stories as naturally as fabrication produces use objects. If it were not outside the scope of these considerations, one could add the predicament of thought to these instances; for thought, too, is unable to “think itself” out of predicaments which the very activity of thinking engenders. What in each of these instances saves man — man qua animal laborans, qua homo faber, qua thinker — is something altogether different; it comes from the outside — not, to be sure, outside of man, but outside each of the respective activities. From the viewpoint of the animal laborans, it is like a miracle that it is also a being which knows of and inhabits a world; from the viewpoint of homo faber it is like a miracle, like the revelation of divinity, that meaning should have a place in that world. The case of action and action’s predicament is altogether different. Here, the remedy against the irreversibility and unpredictability of the process started by acting does not arise out of another and possibly higher faculty, but is one of the potentialities of action itself.
The possible redemption from the predicament of irreversibility — of being unable to undo what one has done though one did not, and could not, have known what he was doing — is the faculty of forgiving. The remedy for unpredictability, for the chaotic uncertainty of the future, is contained in the faculty to make and keep promises. The two faculties belong together in so far as one of them, forgiving, serves to undo the deeds of the past, whose “sins” hang like Damocles’ sword over every new generation; and the other, binding oneself through promises, serves to set up in the ocean of uncertainty, which the future is by definition, islands of security without which not even continuity, let alone durability of any kind, would be possible in the relationships between men. Without being forgiven, released from the consequences of what we have done, our capacity to act would, as it were, be confined to one single deed from which we could never recover; we would remain the victims of its consequences forever, not unlike the sorcerer’s apprentice who lacked the magic formula to break the spell. Without being bound to the fulfillment of promises, we would never be able to keep our identities; we would be condemned to wander helplessly and without direction in the darkness of each man’s lonely heart, caught in its contradictions and equivocations — a darkness which only the light shed over the public realm through the presence of others, who confirm the identity between the one who promises and the one who fulfills, can dispel. Both faculties, therefore, depend on plurality, on the presence and acting of others, for no one can forgive himself and no one can feel bound to a promise made only to himself; forgiving and promising enacted in solitude or isolation remain without reality and can signify no more than a role played before one’s self.

via amor mundi: Hannah Arendt on Technology and Nature.

Screenhero | Collaborative Screen Sharing

Screenhero lets you screen share any application with anyone, no matter where they are. It’s super simple and blazing fast. You each get your own mouse pointer, and you’re both always in control. It’s designed for collaboration, not just broadcasting your screen. It’s like Google Docs for any application on your computer.

Screenhero is designed to feel like you’re sitting next to the person you’re working with — even when you’re miles away. It’s available for both Mac and Windows.

via Screenhero | Collaborative Screen Sharing.

Public Lab DIY Spectrometry Kit by Jeffrey Yoo Warren — Kickstarter

Public Lab DIY Spectrometry Kit by Jeffrey Yoo Warren — Kickstarter.

oreillymedia/open_government · GitHub

Wow, O’Reilly has made Open Government available to the public free of charge; really not much I could say beyond “good guy does good thing.”  Worth a read.

Open Government was published in 2010 by O’Reilly Media. The United States had just elected a president in 2008, who, on his first day in office, issued an executive order committing his administration to “an unprecedented level of openness in government.” The contributors of Open Government had long fought for transparency and openness in government, as well as access to public information. Aaron Swartz was one of these contributors (Chapter 25: When is Transparency Useful?). Aaron was a hacker, an activist, a builder, and a respected member of the technology community. O’Reilly Media is making Open Government free to all to access in honor of Aaron. #PDFtribute

— Tim O’Reilly, January 15, 2013

via oreillymedia/open_government · GitHub.

Bertrand Russell and F.C. Copleston Debate the Existence of God, 1948 | Open Culture

On January 28, 1948 the British philosophers F.C. Copleston and Bertrand Russell squared off on BBC radio for a debate on the existence of God. Copleston was a Jesuit priest who believed in God. Russell maintained that while he was technically agnostic on the existence of the Judeo-Christian God–just as he was technically agnostic on the existence of the Greek gods Zeus and Poseidon–he was for all intents and purposes an atheist.

via Bertrand Russell and F.C. Copleston Debate the Existence of God, 1948 | Open Culture.

The Horse, the Wheel, and Language – David W. Anthony – Book Review – New York Times

Prepare for a massive series on PIE.  Many folks love PIE.  Renfrew, Anatolia, Kurgan culture, Gimbutas, Mallory… I will try to hit it all.  (Just because this article is first has no bearing on any “ranking” of positions; I just have to start somewhere.  Yeah, this is a poor place to start, but I didn’t want to bookmark it or lose the link, so hey!)  Actually, I will return and resequence/recontextualize once I decide on more articles to use.

Where Proto-Indo-European came from and who originally spoke it has been a mystery ever since Sir William Jones, a British judge and scholar in India, posited its existence in the late 18th century. As a result, Anthony writes, the question of its origins was “politicized almost from the beginning.” Numerous groups, ranging from the Nazis to adherents of the “goddess movement” (who saw the Indo-Europeans as bellicose invaders who upended a feminine utopia), have made self-interested claims about the Indo-European past.

via The Horse, the Wheel, and Language – David W. Anthony – Book Review – New York Times.

Child Labor & Lewis Hine – a set on Flickr

Working as an investigative photographer for the National Child Labor Committee (NCLC), Lewis Hine (1874-1940) portrayed working and living conditions of children in the United States between 1908 and 1924.

The Library of Congress’ National Child Labor Committee Collection includes more than 5,100 photographs that came with the records of the organization. Many of the pictures are familiar, but others are relatively unexplored. Accompanying original captions, often rich with detail, offer clues for learning more about individuals, places, and work environments from a hundred years ago.

Do some of the pictures (or captions) seem heavy-handed? This was definitely photography with a purpose: to support the NCLC’s efforts to promote the “rights, awareness, dignity, well-being and education of children and youth as they relate to work and working.” Hine traveled to many parts of the United States, documenting children at work in factories, fields, and doing piece work at home. He also used the photographs to portray the consequences of child labor, including its impact on the health, safety, and education of the next generation. In some cases, the photographs suggest solutions, including organized and healthful activities for the nation’s youth.

The conditions Hine operated under were far from ideal. He referred to his work as “detective work,” and his captions often provide details of the names, ages, hours, and wages of the people he photographed, as well as the name of a “witness” who accompanied him. Supervisors and workers frequently regarded him with suspicion. Hurried work under these conditions may explain why some information Hine recorded has proven inaccurate.

It’s evident in many of the photographs that the workers were highly conscious of the camera. Nevertheless, Hine sometimes caught unguarded moments and playful interaction, as well as many memorable faces. Hine also used photographs to show what wasn’t there—for example, an almost-empty school at harvest time. And sometimes, whatever the photographer intended, the people in the photograph simply saw the occasion as an opportunity for a family portrait.

Did people see the photos at the time? The NCLC made a concerted effort to show the pictures to the public, including them in its own publications and placing them in newspapers and progressive publications. The photos also appeared in stereopticon slide shows and in displays that the NCLC circulated. 

We hope the photographs offer an opportunity for continuing exploration and reflection. 

Learn more:

• View background information.

• View sources for reading about Hine’s work and the National Child Labor Committee, including research about individuals in the photos.

• View a sample National Child Labor Committee report, showing how information on Maryland’s canning industry integrated references to the photos into the text.

• View an example of how the high resolution digital files available through the Library of Congress Prints & Photographs Online Catalog enable viewing of details (sometimes gory ones) that drive home the message of the photographs: “Bringing an NCLC Photo into Focus.”

• View the U.S. National Archives’ set of Lewis Hine photos from the Progressive Era.

Child Labor & Lewis Hine – a set on Flickr.

NPLdigital – YouTube

NPLdigital – YouTube.

Why videos go viral |video| @GrrlScientist | Science |

Evolution is happening every day. Evolution is occurring in plain sight. Every time a new niche appears or an existing niche opens up, evolution has the opportunity to go wild. Take human culture for example. Shortly after the internet popped up and created a new niche, popular culture began undergoing a vast and increasingly rapid transformation. Instead of merely being consumers of a very few people’s idea of what “pop culture” should look like, individuals now are beginning to define and set the trends. Thanks to the internet, regular people like you and me are now creating popular culture.

In this insightful and entertaining video, we meet Kevin Allocca, the trends manager at YouTube, who shares his observations about what makes a video into a phenomenon.

via Why videos go viral |video| @GrrlScientist | Science |.

Screening Out the Introverts – Advice – The Chronicle of Higher Education

Screening Out the Introverts – Advice – The Chronicle of Higher Education.

Some years ago I joined my students in taking the Myers-Briggs Type Indicator, a test to determine personality type. It was an assignment in a course I was teaching on vocational exploration.

Assuming there would be an average distribution of results among the 20 students, I planned a series of small-group assignments in which they would discuss their own results for each of the test’s personality dichotomies (e.g., thinking versus feeling). But a problem turned up immediately: Not one student had received an “I” for introversion. Everyone, it seemed, was an extrovert (Myers-Briggs spells it with an “a,” like “extra”). Everyone but me.

Extroverts—if you accept such categories—are oriented outward, toward other people and toward action over reflection. They draw energy from social interaction, and they tend to be outspoken and gregarious. Introverts, on the other hand, are oriented toward the inner life of thought; they tend to be reserved and cautious. They find social interactions draining, and they need solitude to recharge. It’s not that introverts are antisocial so much as that they appreciate fewer, more intimate friendships. They don’t like small talk but appreciate deeper discussions.

I knew my students well enough to suspect that I was not the only one with that tendency. A third of them barely spoke in class unless called upon. A few hardly spoke to anyone. Perhaps the introverted choices on the test were too stigmatizing to consider (e.g., “Would you rather go to a party or stay home reading a book?”). The students had used the test to confirm that they had the right, “healthy” qualities.

Given that introversion is frowned upon almost everywhere in U.S. culture, the test might as well have asked, “Would you prefer to be cool, popular, and successful or weird, isolated, and a failure?” In the discussion that followed, a few students observed—with general agreement—that introversion was a kind of mental illness (and, one student noted, a sign of spiritual brokenness). “We are made to be social with each other” was a refrain in the conversation.

A few sympathetic students tried to persuade me that my introvert result was a mistake. How could I stand in front of that room, leading that very conversation, smiling at them, without being an extrovert? The answer: careful planning, acting, and rationing my public appearances. Also, my introversion fades when I become comfortable with unfamiliar people (the first weeks of classes are a strain).

We soon moved on to other personality dichotomies that were more evenly distributed. When the class was over, many of the students continued talking in an animated way about their results. Several left, silently, by themselves. The conversation left me exhausted; I went to my office and closed the door for an hour as I prepared for my next performance.


via Screening Out the Introverts – Advice – The Chronicle of Higher Education.

A Colder War – a novelette by Charles Stross

A Colder War – a novelette by Charles Stross.

via A Colder War – a novelette by Charles Stross.

China’s Loess Plateau Reclamation

The Loess Plateau in China’s Northwest is home to more than 50 million people. Centuries of overuse led to one of the highest erosion rates in the world and widespread poverty. Two projects set out to restore the Loess Plateau.

Background Information:

The project area covers 15,600 square km of land in nine tributary watersheds of the Yellow River on the Loess Plateau in Shanxi, Shaanxi, and Gansu Provinces, and the Autonomous Region of Inner Mongolia, China.

The Loess Plateau covers an area of some 640,000 sq. km in the upper and middle parts of the drainage basin of the Yellow River. Before the project, most of the project area consisted of severely degraded and barren land and low productivity slope land. The loess soil has good agricultural properties, but drought is a major constraint in crop production. Slope lands in the Loess Plateau produce extremely high levels of sediment runoff per unit area. Broad flat terraces for crops and narrow terraces for trees and shrubs are essential for profitable use of lands in the project areas. Per capita incomes in the project area are mostly below the poverty line.

The objective of the project is to help achieve sustainable development in the Loess Plateau by increasing agricultural production and incomes, and improving ecological conditions in tributary watersheds of the Yellow River, through: (a) the introduction of more efficient and sustainable uses of land and water resources; and (b) the reduction of erosion and sediment flow into the Yellow River. The project finances the integrated planning and treatment of small watersheds. The project (a) creates high-yielding, level farmland for production of field crops and orchards, thereby replacing areas devoted to crops on erodible slope lands, and (b) plants the slope lands with a range of trees, shrubs and grasses for the production of fuel, timber and fodder. These measures increase per hectare productivity on the improved farmland, raise overall output and incomes, and have positive ecological impact. Comprehensive and integrated planning of individual watersheds in close consultation with the beneficiaries in the villages is a key aspect of the project.

This paper from February 2012 [PLOS PDF] quantitatively evaluates the effects of the Grain to Green Program (GTGP) implementation on ecosystem services in the Loess Plateau region (Figure 1).

Prior to the GTGP implementation, the Loess Plateau was dominated by grasslands and farmlands. Between 2000 and 2008 the land cover patterns of the Loess Plateau changed remarkably. Woodland, grassland and residential land cover increased by 4.9%, 6.6% and 8.5%, respectively. Farmland decreased by 10.8% and desertification increased slightly, by 0.3% (Figure 2).

The increases in grassland and woodland were distributed along a northeast to southwest land strip (Figure 3).

The regional climate condition of the Loess Plateau region has exhibited a warming and drying trend. This climate trend was revealed from the analysis of time series data between 1951 and 2008, obtained from 85 weather stations located in the Loess Plateau region (Figure 4). Precipitation was found to decrease annually by an average of 0.97 mm and temperature was found to increase annually by an average of 0.02°C.

Regional water yield decreased after the implementation of the GTGP. Over half of the study area (northeast to southwest of the Loess Plateau) experienced a decrease in runoff (2–37 mm/year), with an average 10.3 mm/year decrease in runoff across the whole Loess Plateau over the 2002–2008 period (Figure 5).

Soil conservation in the Loess Plateau, represented as a decrease in soil erosion, has improved since 2000 as a result of vegetation restoration (Figure 6).

The spatial variation of carbon sequestration in the Loess Plateau is shown in Figure 7.

The time and rate of the gross production change appeared to occur later and more slowly than the grain productivity change (Figure 8). Actual grain production increased across the whole of the Loess Plateau at a rate of 18% between 2000 and 2008.

Table 1. Rainfall erosivity and soil retention characteristics in the Loess Plateau region from 2000 to 2008.

Table 2. Area of cropland converted to forest (grassland) and the carbon sequestration by vegetation, soil and ecosystems in Loess Plateau between 2000 and 2008.

Hope in a Changing Climate
While serving as executive director of the Environmental Education Media Project Jonathan Halperin managed the creation of Hope in a Changing Climate, the award-winning documentary screened in Copenhagen at COP-15 and broadcast globally by BBC World.

The story of successful large-scale ecosystem restoration that is shown in the film, narrated by EEMP founder John D. Liu, continues to inspire audiences around the world and has been translated into French, Russian, Chinese, and numerous other languages.

Origin and Formation of Loessal Soils

Understanding the conditions of the environment that the people of the Loess Plateau inhabit is a necessity, and with this acknowledgement comes the need to establish what loess actually is. Containing more nutrients than sand, it is also much finer. Its silt-like texture makes it among the most erosion-prone soils known on the planet (Jiang 18). Loess is also extremely sensitive to the forces of wind and water, bearing the dubious honor of being blown or washed away quicker than any other soil type (Pye 125).

Prior to this century, loess was a compelling mystery for geologists seeking to know its origins. Earlier theories held that loessal deposits were beds of ancient oceans, or even that they were composed of cosmic Saturn-like rings of dust that may have once encircled the globe but somehow rained down in pockets (Pye 237). Into the 19th century, however, an apparent correspondence was noted between the timing of noticeable waves of loessal sedimentation and the glaciation of the northern hemisphere (Smalley 358).

The Loess Plateau was formed in waves between 2.4 and 1.67 million years ago, helped along by the uplift of the Tibetan Plateau, the movements of several huge glaciers across desert regions, and strong winds maintained by a high-pressure system in a cold and dry continental interior (Meng and Derbyshire 141). It is the world’s largest deposit of loess, approximately the size of France, designated by the large black area in Figure 2 below (Yoong 95).

Of all the factors contributing to soil erosion in the Loess Plateau region, including desertification, wind erosion, violent rainstorms, and earthquakes, the most significant overall has been irrational land use (Bojie et al. 732). Slopeland, although much less stable than the level “yuan” tables (see Fig. 4), is continuously cultivated out of sheer need for more agricultural land. These plowed slopes account for as much as 70% of soil loss in the region (Luk 23). However, it is not enough to simply declare the Loess Plateau’s inhabitants irrational. Many factors contribute to their use of the land in such a way.

Lev Semenovich Berg was born in Bendery, in Moldova. He had great success as an ichthyologist and geographer; he also proposed, in 1916, an interesting theory of loess formation. As a biologist he was persecuted by Lysenko and the Soviet state in the time of pseudo-science in the 1930s and 1940s. Despite his being persecuted, the loess theory became, in effect, the official Soviet theory of loess formation. This theory had to be compatible with his ‘landscape’ theory, which did not find favour in Marxist-Leninist geography. Berg’s loess theory was very much a geographical theory, as opposed to the geological theory of aeolian deposition, which was accepted outside the Soviet Union. Berg was hugely successful in many fields, but his contributions to loess science tend to be neglected. His ‘soil’ theory of loess formation has been widely disparaged but still has some influence in Russia. The concept of loessification may still be relevant to the later stages of deposit formation; the slow transition from metastable to collapsible may be best described as loessification.

The Lessons of the Loess Plateau, part two, part three, part four, part five, part six.

SOPA! How it could destroy public intellectualism, and the furtherance of an educated, creative, developing society.

UPDATE: The White House has begun to respond to the protesting voices of American Citizens (ironically, united, but not by Citizens United).

This is a list of sites taking action against SOPA on the 18th of January.

This post will be an updating one; currently I am just trying to collate information, links, and materials, as well as sources relating to the primary actors in this game… the politicians advancing such dangerous, ignorant, reactionary, anti-modern, anti-liberty, anti-freedom laws.

Currently, no ideas in this post are my own (for some digging I did on copyright a while back, check here). I will work to compose my own thoughts, but I thought that was completely secondary to linking OTHERS to the information to create a cogent case against such techno-phobic, corporately sponsored ignorance in our nation’s lawmakers.  I will note now only that SOPA is emphatically not “simply” a law to “stop piracy”… as written, it will break the internet.  I know many will already be aware of the issues presented, but consider that this is not simply a matter of “them foolish kids downloading the new hot Britney Spears track”… this law will DETRIMENTALLY impact the free flow of information between scholars, public intellectuals, and people who are hungry for rational, logical debate and information transmission.

If we wanted Twitter to be censored, and to have our information fed through a blender that filters out anything negative about our government… we might simply go to China.   It is pathetically ignorant to, on one hand, spew spittle-flecked invective at “human rights abusing” China, and to sneer about how “Chinese citizens are not free, and do not have free thought or expression”… yet, hypocritically, our distinguished “leaders” have decided that that is what their vision of America entails.


“A growing tension between rights holders and libraries”

Butler mentions three copyright infringement lawsuits against universities and their libraries—the Georgia State e-reserves case, the AIME vs. UCLA video streaming case, and Authors Guild v. HathiTrust. “These lawsuits reflect a growing tension between rights holders and libraries, and some rights holders’ increasingly belligerent enforcement mentality,” Butler wrote. (On September 14, the LCA released a statement [PDF] in support of HathiTrust and its research library partners in the Authors Guild v. HathiTrust case.)

What can you do now?  The EFF offers this toolkit for anti-SOPA activists:

  1. Call your Senators and Representative and tell them to oppose Protect-IP and SOPA, respectively.  Click here for some suggested talking points. Then tell your friends about the call on social media sites.
  2. Contact Congress through EFF’s action center.  Customize your letter to explain who you are and why you are worried about this bill. If you’re outside the United States, try this petition from Fight for the Future instead.
  3. If you work for a tech company, approach the leadership at your company and explain to them your concerns. Urge them to join you in speaking out. These companies (PDF) already took a stand.
  4. Write a blog post about the blacklist bills.  Whether it’s a candid explanation of why you oppose the legislation, a discussion of the effect on human rights, or a call to filmmakers to protest the blacklist, there are plenty of things to say about this scary legislation. Help us get the word out by writing articles on your own blog, your school blog, or on blogs that take guest contributors.
  5. Are you an artist? Showcase the dangers of censorship through art and music, and use your art as a way of reaching people who might otherwise not know about this issue. You can make stickers, posters or patches, create a YouTube video, or hold an open-mic night around censorship.
  6. Do you administer a website? Then put a banner on your site protesting censorship or link to EFF’s action center.
  7. Coordinate a teach-in or debate at your local college or community center. Invite local experts in copyright and free speech to come discuss the issue.
  8. If you’re in high school, talk to your civics and media studies teachers about a class discussion on the implications of this bill. Point them to our free Teaching Copyright materials.
  9. If you’re in college, speak out through like-minded organizations working for digital freedom, such as Students for Free Culture or Electronic Frontier on Campus. If there isn’t a chapter at your school, start one. Then use that platform to coordinate with other students to speak out against this bill.
  10. If you’re in college, set up a meeting with your college newspaper editorial board and explain the bill to them and why they should speak out about it. Work with them to write articles on the topic. Check out these examples from the University of Buffalo, University of Massachusetts, and University of Minnesota.  See more examples at the Center for Democracy and Technology’s Chorus of Opposition page.
  11. Write a letter to the editor of your local paper. Remember, these are often really short. Find out the requirements for your local paper and follow them carefully.
  12. Become a member of EFF. We’re leading the fight to defend civil liberties online, so that future generations will enjoy an Internet free of censorship. By standing together, we can make it happen.


The Senate’s PROTECT IP Act and the House’s Stop Online Piracy Act (SOPA) are so noxious that even the Business Software Alliance has serious reservations, and SOPA’s main backer had to take to the virtual pages of National Review today to quell a growing revolt among his conservative colleagues about “regulating the Internet.” Whatever you think of the legislation, it unquestionably represents a sea change in the US approach to the Internet, one which explicitly contemplates widespread website blocking and search engine de-listing.

The level of debate on an issue this important has been… suboptimal. (And hearings have been rather lopsided affairs). Just listen to the rhetoric of SOPA author Lamar Smith: “Enforcing the law against criminals is not censorship.” Pithy, sure, but it doesn’t relate to any actual objections put forth by thoughtful critics.

But rightsholders do need some means of enforcing copyrights and trademarks, something tough to do when a site sets up overseas and willfully targets American consumers with fake goods and unauthorized content. Some sites can be leaned on when hosted in friendly countries, but many simply thumb their nose at US law with impunity. If you can’t go after the sites at the source, and you can’t lure their operators to the US (both tactics used with success in other cases), what’s left but blocking site access from within the US?


OPEN: Online Protection & Enforcement of Digital Trade Act

The OPEN Act secures two fundamental principles. First, Americans have a right to benefit from what they’ve created. And second, Americans have a right to an open internet. Our duty is to protect these rights. That’s why congressional Republicans and Democrats came together to write the OPEN Act.


One of the many serious problems with the Stop Online Piracy Act (“SOPA”) (pdf) is how it tacks itself onto existing law to expand liability to people who may be three times removed from any actual copyright infringement. In § 103, SOPA wraps another layer of liability around what are called the “anticircumvention provisions” of the Copyright Act (which are found in section 1201 of the Copyright Act). The goal of the anticircumvention provisions is preventing people from circumventing technology that protects copyrighted works. Importantly, however, some courts have held that § 1201 prohibits circumvention even when the person’s ultimate use of the work does not infringe copyright. So if you circumvent technology to access a work in a way that’s completely legal, you might still be violating § 1201. If SOPA is passed, even more individuals and entities will get caught up in an ever-expanding net of liability, which is especially ridiculous when we’re talking about a provision of the law that may not even require actual copyright infringement.

SOPA’s Ever-Expanding Net of Liability

Earlier this week, the Library Copyright Alliance (LCA)—made up of the American Library Association, the Association of Research Libraries (ARL), and the Association of College & Research Libraries—released an open letter [PDF] to Sen. Ron Wyden (D-OR), Rep. Darrell Issa (R-CA), and Rep. Jason Chaffetz (R-UT), “welcoming [the] release” of a discussion draft bill the legislators have sponsored. Called the Online Protection and Enforcement of Digital Trade (OPEN) Act, the bill has been touted as a potential alternative to SOPA.

Though SOPA [PDF] is primarily aimed at combating copyright infringement by foreign websites, many observers have taken issue with the enforcement methods described in the bill, which could have far-reaching effects—including in the library world. (Yesterday, 83 Internet engineers and inventors wrote an open letter to Congress saying that SOPA and similar legislation “will create an environment of tremendous fear and uncertainty for technological innovation.”)

On November 8, Brandon Butler, ARL’s director of public policy initiatives, wrote another open letter on behalf of the LCA criticizing two provisions of SOPA’s Section 201. One of them, he wrote, could expand the definition of “willful” copyright infringement to potentially include cases where a person (or organization) believed in good faith that its infringing conduct was lawful; such “innocent” infringement carries much smaller potential for monetary penalties than willful infringement does. (“In cases of willful infringement, the court can increase the statutory damages to $150,000; in cases of innocent infringement, the court can reduce the statutory damages to $200,” Butler wrote.)

Brandon Butler, Director of Public Policy Initiatives, at ARL talks SOPA with CNN’s Brian Todd.

Direct to Video Interview (via ARL Policy)

What’s Wrong With SOPA?

On Digital Preservation Techniques

If you take any given link (or all outgoing links on the blue [or a triage of links, as suggested in a modified “MoSCoW” method here, starting with the places that always kill links quickly, like, if any still get posted, Yahoo!]), paste it in the box HERE (Wayback Machine, beta), and click “show latest”, it automatically takes a snapshot at that time, and then in a month or so it will be permanently in the archive (which looks like this)… which can then be queried by any of many tools. So, basically, is there a way to get a computer to strip and copy links, paste them there, and then “press” a button on a web-page? Or is one of these tools more appropriate for this “archiving” task (Web Curator Tool, Firefox Page-Saver/Scrapbook plugin)?
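The automation asked about above can be sketched in a few lines. This is a minimal sketch, assuming the Wayback Machine’s public “Save Page Now” endpoint (https://web.archive.org/save/&lt;url&gt;), which archives a page when fetched; the link-stripping uses only the standard library:

```python
from html.parser import HTMLParser

ARCHIVE_SAVE = "https://web.archive.org/save/"

class LinkExtractor(HTMLParser):
    """Collect the href of every absolute outgoing <a> link on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def save_urls(html):
    """Return a 'Save Page Now' URL for each outgoing link in `html`."""
    parser = LinkExtractor()
    parser.feed(html)
    return [ARCHIVE_SAVE + link for link in parser.links]

# Fetching each returned URL (e.g. with urllib.request.urlopen) asks the
# Wayback Machine to snapshot that link at this moment.
```

Pointing this at a page’s HTML and fetching each returned URL amounts to the “strip the links and press the button” loop described above; relative links are skipped, since the archive needs absolute URIs.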

I should also clarify: the Memento project is not for the “archiving” part, it is for the navigation and interconnection of the disparate “archive-sources” after they are captured; see, for example, this open source crawling tool.

These resources might help anyone who is looking at this major problem with web architecture and thinking they want to “do something”. Links via “A Guide for Archiving Web Pages”.

So one would find a way of making auto-archivisation of MeFi outgoing links first; then, on a server, one would do something like link to “memento/timeportals” (or something; it is explained more clearly here [Having your server link to will cause Memento clients to talk to the timegate aggregator, which will check 10+ public archives for the appropriate pages. This of course assumes that public archives have been crawling your site; if the site is very new it might not have been crawled & archived yet.])… which then parses the archives, and sees which, if any, possess the proper resources.

The following terms specific to the Memento framework are introduced here:
Original Resource: An Original Resource is a resource that exists or used to exist, and for which access to one of its prior states is desired.
Memento: A Memento for an Original Resource is a resource that encapsulates a prior state of the Original Resource. A Memento for an Original Resource as it existed at time Tj is a resource that encapsulates the state that the Original Resource had at time Tj.
TimeGate: A TimeGate for an Original Resource is a resource that supports negotiation to allow selective, datetime-based, access to prior states of the Original Resource.
TimeMap: A TimeMap for an Original Resource is a resource from which a list of URIs of Mementos of the Original Resource is available.
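In practice, a client negotiates with a TimeGate over plain HTTP: it sends the Original Resource’s URI to the TimeGate with an Accept-Datetime header (per the Memento protocol, RFC 7089), and the TimeGate redirects to the Memento nearest the requested time. A minimal sketch, assuming the public Time Travel aggregator at timetravel.mementoweb.org as the TimeGate:

```python
from datetime import datetime, timezone
from email.utils import format_datetime
from urllib.request import Request

# Public Memento aggregator; any single archive's TimeGate works the same way.
TIMEGATE = "http://timetravel.mementoweb.org/timegate/"

def timegate_request(original_uri, when):
    """Build a datetime-negotiation request against a TimeGate.

    The Accept-Datetime header (RFC 1123 date format) asks the gate to
    redirect to the Memento of `original_uri` closest to `when`.
    """
    return Request(TIMEGATE + original_uri,
                   headers={"Accept-Datetime": format_datetime(when, usegmt=True)})

# Opening this request with urllib.request.urlopen would follow the
# redirect to the archived copy of example.com nearest 11 September 2001.
req = timegate_request("http://example.com/",
                       datetime(2001, 9, 11, tzinfo=timezone.utc))
```

The aggregator is convenient only because it consults many public archives at once; the request shape is the whole protocol from the client’s side.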

The original poster might find this site interesting and on-topic. It was created by the Library of Congress Web Archives; it is the “Minerva archive”, which has a whole lot of archives from immediately pre-9/11, and then also many from after… it essentially documents “how” America, and the world, used the internet both during 9/11 and in the aftermath. And here is the list of other LCWA topics.

Oh, wow, this site is incredible.
Spanamwar.com: Action Reports and First Hand Accounts; Diver Charles Morgan of the USS NEW YORK describes his descent into the MAINE (*graphic description of the results of war). Not sure what the “Battleship Maine” is? No excuses now. Via “single sites archive“. Gratuitous image of an awesome three dollar bill of Continental Currency… seriously, are archives actually singularities, from over the event-horizon of which my time may never return?

Napster: an echo of an era, the unimportance of the sonic boom, and the rise of pirated ebooks

A very well documented backgrounder on the issues, facts, and positions on how the world of digital distribution would look going forward is provided in a research paper from Lynette White and Sean Elliott, two Australian researchers, titled “Large-scale Copyright Infringement: the Inevitable Consequence of the Digital Age”.  Below is an excerpt describing some of the issues at play in the multi-modal revolutions occurring (the whole paper, from 3 June 2001, is a must read for the context and background on the shift towards accepting the importance of the digital realm to content, content distributors, and content consumers [though they do seem to make several judgmental, seemingly unreferenced statements regarding their asserted “motivation” for copyright violations; I was unaware that human nature had been distilled to a set of theories]).  First, as Elliott and White define the stakeholders, they see three prime interlocutors, each with different interests, power, and stakes, but each being key to the continued existence of the others;

The user — the target audience for Shawn Fanning was college and university students, and they are probably the largest user group of Napster.  A characteristic of this part of the population is high computer literacy.

 The artists — Alanis Morissette and Don Henley are two that have spoken out about the artists’ views, in a debate that has largely overlooked their perspective, and focused strongly on the user and the music industry.

 The industry — music publishers and music recording companies; the big five companies are EMI, Sony, Universal/Vivendi, AOL Time Warner, and Bertelsmann.


Elliott and White perceive digital distribution as a revolution, potentially akin to the societal reformations and the reshaping of the economic landscape that came parceled with the industrial revolution.  Here, they describe the roots of the coming, and still occurring, changes;

The inevitable revolution had its seeds in three areas. The introduction of the digital medium has made it possible for music to be successfully distributed without loss of quality. The networked media meant that a greater number of people had easy access to digital music, and facilitated simple distribution.  Finally, human nature has contributed, as “there is no way that anyone can fight technology that gives us more instant gratification with less energy expended.”   1)The nature of the digital medium, 2)The nature of digital networks, 3)Human Nature


To understand where we are today in terms of the discussion of copyright material, and open access one must first go back to the service that brought the idea of digital distribution into the mainstream consciousness;

In 1999, an 18-year-old college dropout named Shawn Fanning changed the music industry forever with his file-sharing program called Napster.

His idea was simple: a program that allowed computer users to share and swap files, specifically music, through a centralized file server. His response to the complaints of the difficulty to finding and downloading music over the Net was to stay awake 60 straight hours writing the source code for a program that combined a music-search function with a file-sharing system and, to facilitate communication, instant messaging. Now we have Napster, and people are pissed.


Perhaps the most under-examined issue is the complex labyrinth of the large media publishing corporations, and their eager conglomeration; yes, we can generally all agree that artists starving and dying in the poorhouse is not a positive outcome.  When we learn that artists are starving not only because of so-called pirates, suddenly the behaviors, actions, and arguments of the publishing side of the equation become more worthy of a second glance with a critical eye (was the “piracy generation” really just the “fault” of bad parenting, or a ‘vile’ selfish human nature excuse?  Are there perhaps two [or even three] dancers at the pirate party ball?).  A wider perspective may come to recognize that in the zeal of “protecting artists” (generally conceived of as occurring by protecting ‘profit margins’), the publishing groups built a system so entrenched and resistant to change, advancement, or evolution that it cast innovators and entrepreneurs as “bad guys” and criminals, rather than partners; as everyone came to do once the Apple iTunes Music Store joined the digital distribution jig on April 28, 2003, reaching ten billion sales in under seven years.

Some recent data, and artist anecdotes, actually suggest that the RIAA and other industry bodies worked counter to the interests of the actual creative people (not to mention the modern societies which benefit the most from a healthy, thriving creative culture): by complex licensing schemes, where artists can receive nothing for uses of their material on soundtracks and other multi-media projects; by “Hollywood” accounting (the well documented practice of claiming “losses” on properties which make blockbuster profits [a wide variety of case studies may be found here]); and by using “artist advances” like drug dealers use “free” drugs (particularly hooking young and new artists on extravagant lifestyles, not making clear how much the artist actually owns, and leaving the artist footing the bill [or rather, paying off the bills of complex contracts demanding multiple albums on a schedule, ultimately leading to many poor albums, rushed artistry, and artists pushing out disposable quality material just to satisfy contractual obligations]). For several quotes from artists on the issue, see this roundup of several comments on “file sharing”.

What was shown clearly to be a profitable and stable business model by the Apple iTunes Music Store was resisted, and even trumpeted as a death knell of all creativity, under Napster.  Once the illegitimate, or biased, and often misleading rhetorical flourishes and arguments of the RIAA were disposed of, or rather pushed aside, it became more and more clear that a majority of people were absolutely willing to pay their favored creative artists for access to the material – if they were offered the opportunity, as the RIAA had forcibly denied them from the late 1990’s until around 2003, a period of several particularly pernicious and vicious “anti-pirate” lawsuits, and one of the worst for the reputation of the recording industry.  “The record companies have created this situation themselves,” says Simon Wright, CEO of Virgin Entertainment Group, which operates Virgin Megastores (from a Rolling Stone article which has fallen into the pit of broken links on the internet, and which I was able to make accessible using the incredible “Memento Project”).

What fans showed the industry they were not willing to do was to operate by the terms of what amounts to, basically, massive multinational content packagers.  The “piracy” waves of the late 1990’s and early 2000’s put on display a consumer body which was willing to step “outside” the law and the “established” norms of the business model that the record companies desired.  People, whether cognizant of the legalities or not, made it known that they had seen a new way of accessing various cultural artifacts (digitally), and they were no longer willing to be gouged on the prices and sub-par failings of Compact Disc albums, Digital Video Disc movies, or DVD based PC games (optical media is useless with even the most minor imperfection [show me a 10 year old CD with no scratches, and I will show you a CD that has never been used; even then, I have seen brand new optical media with fatal imperfections], while digital media takes much more abuse to degrade in quality, if at all), nor on the associated rootkits, ‘anti-piracy’ measures, password encoding, and other value reducing measures which made it more difficult for legitimate owners to enjoy the media they legitimately purchased, while, on the other hand, pirates got a much more pleasurable, non-invasive, non-debilitated experience [see Appendix 2 for a visual example of this “lower quality to those who actually pay for media”], and an experience which didn’t accuse the user at every turn of piracy with a litany of “just in case you are actually an evil pirate” warnings and pre-emptive countermeasures.

This detailed summary of the events provided by the University of Florida Interactive Media Lab (where students have been creating online digital projects since 1994), from the spring of 2001 allows for an examination of the timeline, and addresses some of the prevailing positions of the time;

On March 5th, 2001, Judge Marilyn Patel issued a revised injunction consistent with the February 12th decision by the Ninth Circuit Court of Appeals in this case.

While compliance issues and other matters continue to be sorted out in the aftermath of these rulings, many Netizens have continued their file sharing practices via the Gnutella Network.

Highlights of the March 5th Injunction

· Napster is enjoined from “engaging in, or facilitating others in, copying, downloading, uploading, transmitting, or distributing copyrighted sound recordings…”

· However, “the Ninth Circuit held that the burden of ensuring that no copying, downloading, uploading, transmitting or distributing of plaintiffs’ copyrighted works occurs on the system is shared between the parties. The court ‘place[d] the burden on plaintiffs to provide notice to Napster’ and imposed on Napster the burden ‘of policing the system within the limits of the system.’

· The Record Industry Plaintiffs must “provide notice to Napster of their copyrighted sound recordings by providing for each work:  (A) the title of the work;  (B) the name of the featured recording artist performing the work (“artist name”);  (C) the name(s) of one or more files available on the Napster system containing such work; and (D) a certification that plaintiffs own or control the rights allegedly infringed.”

· “All parties shall use reasonable measures in identifying variations of the filename(s), or of the spelling of the titles or artists’ names, of the works identified by plaintiffs. If it is reasonable to believe that a file available on the Napster system is a variation of a particular work or file identified by plaintiffs, all parties have an obligation to ascertain the actual identity (title and artist name) of the work and to take appropriate action within the context of…[the March 5th]…Order.”

· “Once Napster ‘receives reasonable knowledge’…of specific infringing files containing copyrighted sound recordings,” it shall, “within three (3) business days, prevent such files from being included in the Napster index (thereby preventing access to the files corresponding to such names through the Napster system).”
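The “reasonable measures” for identifying filename variations described above amount, in software terms, to fuzzy string matching. As a purely illustrative sketch (not anything Napster actually ran), Python’s standard difflib can flag likely spelling variations of a known title; the 0.6 similarity threshold is an assumption:

```python
import difflib

def likely_variations(known_title, filenames, cutoff=0.6):
    """Flag filenames that are plausible variations of a known work's title.

    Underscores are treated as spaces and case is ignored before
    comparing with difflib's similarity ratio.
    """
    normalized = {name: name.lower().replace("_", " ") for name in filenames}
    hits = difflib.get_close_matches(known_title.lower(),
                                     list(normalized.values()),
                                     n=10, cutoff=cutoff)
    # Return the original filenames whose normalized form matched.
    return [name for name, norm in normalized.items() if norm in hits]
```

For example, given a known title “Smells Like Teen Spirit” and the files [“Smells_Like_Teen_Spirit.mp3”, “vacation_photo.jpg”], only the first is flagged; real index filtering would of course also need artist names and far more normalization.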


On the “other side”: a clear statement of the ideas and history behind Napster was on display when Napster’s interim CEO Hank Barry addressed Congress on April 3, 2001, following the court injunction of a month prior;

Finally, the Napster community says loudly and clearly that it wants artists and songwriters to be paid. I think that the license you create should include a direct Internet rights payment to artists. There is certainly precedent for this in the so-called “writer’s share” of public performance (radio and television) payments that are collected by ASCAP and BMI. As you know, a portion of those payments goes directly to the songwriter.

Senator, this is a moment of tremendous opportunity. For many years, our nation and this Committee heard wonderful promises of an emerging Internet music era, where people could have convenient access to the entire catalog of recorded music over the Internet at the touch of a button.  Well, as often happens, history arrived ahead of time.

And it is a uniquely American story.  A young man with no standing, no credentials, no connections, and no plan for placating the powerful, sat down outside Boston and created an entirely new system.

Within 18 months, we were no longer debating whether there would be music on the Internet, but rather debating the best way to make sure that it continues. More than 60 million people have started a new stage in our national love affair with music. All of us are finding new music – and music we’d forgotten how much we loved.

The question before this Committee is a matter of policy: how to make this new world of Internet music work. The next step should not be shutting it down. The Congress has effectively promoted new technologies in the past, while ensuring that creators benefit; it is essential that you do so again today.


Copyright requires a constant balance between the public’s interest in promoting creative expression and the public’s interest in having access to those works. This is a balance that has often proven impossible to find without the help of the Congress.


The issues surrounding digital pedagogy are intimately bound up with the topics of copyright and open access.  The topic appears to be much more dominated by rhetoric and bloviation when the focus is solely on addressing the “stealing” of the newest hot pop music.  But when one steps aside from the red herring that is “pop-music piracy”, and begins to examine the wider ideas behind open access, the new realities of digital distribution channels, the realities of digital pedagogy, and the rising importance of the digital realm to society, it becomes apparent that digital music access is merely one of a myriad of topics that demand attention, open discussion, and dialogue.  Among the millions of people accessing digital tools and digital resources, listening to their chosen cultural artifacts is but one use.  Within which, yes, some people will just want the new “hot” Britney Spears or Metallica single for free (one might wonder: is this stolen single a “lost” sale?  Would the pirate have actually purchased any of the songs they have?), and stealing the new single certainly is easier today, compared to trying to steal a physically manufactured product, which was transported to a music store, which pays employees to stock shelves and perform sales tasks; some people simply will not care if they harm the artists, the support structure that creates such mega-stars, and the infrastructure of the content industry.  But then, can we see the “pirated single” sort of like the “radio broadcast recorded to a home tape player”, as many respectable, honest, and legitimate people openly admit doing in the bygone analog era?

Here, at this point we must seek to examine the RIAA claims, just what is the scope of the problem according to the RIAA (See also Appendix 3 for a chart on bandwidth usage by use function, as determined by the 2011 Envisional study)?

Music theft is a real, ongoing and evolving challenge.  Both the volume of music acquired illegally and the resulting drop in revenues are staggering.  Digital sales, while on the rise, are not making up the difference.

Consider these staggering statistics:

-In the decade since peer-to-peer (p2p) file-sharing site Napster emerged in 1999, music sales in the U.S. have dropped 47 percent, from $14.6 billion to $7.7 billion.

-From 2004 through 2009 alone, approximately 30 billion songs were illegally downloaded on file-sharing networks.

-NPD reports that only 37 percent of music acquired by U.S. consumers in 2009 was paid for.

-Frontier Economics recently estimated that U.S. Internet users annually consume between $7 and $20 billion worth of digitally pirated recorded music.

-According to the Information Technology & Innovation Foundation, the digital theft of music, movies and copyrighted content takes up huge amounts of Internet bandwidth –  24 percent globally, and 17.5 percent in the U.S.

-Digital storage locker downloads constitute 7 percent of all Internet traffic, while 91 percent of the links found on them were for copyrighted material, and 10 percent of those links were to music specifically, according to a 2011 Envisional study.

While the music business has increased its digital revenues by 1,000 percent from 2004 to 2010, digital music theft has been a major factor behind the overall global market decline of around 31 percent in the same period.  And although use of peer-to-peer sites has flattened during recent years, other forms of digital theft are emerging, most notably digital storage lockers used to distribute copyrighted music.


Where the rhetorical flourishes of the RIAA fall apart, however, is in the claim that “all people looking for digital files” were simply thieves, cheats, and criminals.  The “pop” nature of some of the traded files, particularly on the original Napster, is used to argue that there was “no value”, or “mindlessness”, in that which was traded; but the crackdowns that faced first music traders, then movie traders, then ebook traders, now face scholarly research traders, as academic work is also now traded.  And here the focus turns to digital pedagogy, and the importance of open access to the education systems of today.  How “experimental” will teachers be in sharing ideas, and links not vetted by a lawyer, above students’ “grade level”, or as further readings; will teachers feel brave, and share, and discuss online, or openly, the ideas of the most current academic work?  Or are teachers more likely to retreat, terrified of “slipping up” in the realm of copyright (was that 10 pages or 10% of a work; is it 10% of the work if it originally was in a book, but was republished as a standalone article?), particularly when the potential fines are not “small”, but rather exorbitant, possibly career ending, and bloated massively beyond the “cost of access”.
And it has been happening since I was in elementary and high school: teachers would have access to ideas and messages which were important to helping students grow into good citizens, able to comprehend, and interact with, the complex ideas being taught, but they were “afraid” of simply sharing these materials with their classes, because the legalities were simply too complex. And no, teachers were trained as teachers, in pedagogy, not as lawyers, so it is bizarre that our modern society seems to demand legal knowledge in realms which are hindered to the point of restriction by the legal labyrinth involved. Consider young people making “dancing in apple stores” videos, or the prime example, the referenced work “Tales from the Public Domain: BOUND BY LAW?” (available here with open access), facing legal challenges for not having “cleared” the annoying background music (which happens to be pop song X). How is it promoting creative works, reinterpretation, cultural growth, and evolving ideas when new takes on old ideas are relegated to either being described as deprecated derivative secondary works worthy of sneering, or simply criminal, entombed in legal labyrinths?

A documentary is being filmed. A cell phone rings, playing the “Rocky” theme song. The filmmaker is told she must pay $10,000 to clear the rights to the song. Can this be true? “Eyes on the Prize,” the great civil rights documentary, was pulled from circulation because the filmmakers’ rights to music and footage had expired. What’s going on here? It’s the collision of documentary filmmaking and intellectual property law, and it’s the inspiration for this new comic book. Follow its heroine Akiko as she films her documentary, and navigates the twists and turns of intellectual property. Why do we have copyrights? What’s “fair use”? Bound By Law reaches beyond documentary film to provide a commentary on the most pressing issues facing law, art, property and an increasingly digital world of remixed culture. This book is available under a Creative Commons Attribution-NonCommercial-ShareAlike license. This license allows you to translate the comic for free – read the Portuguese translation, the French translation, or the new Italian translation of the comic!


Provided with avenues to pay musicians (the iTunes music store, Amazon's music store, SoundCloud, Last.FM, TheSixtyOne, StereoFame, and numerous competing services, including the phoenix-like reborn legal Napster), the evidence shows that people desired to support their favorites financially.  Once a method of payment, an avenue for giving back to artists, was provided, people were more than willing to pay in exchange for access to their choices of music, on their terms: files unencumbered with malware.  (Sony BMG was found to have placed rootkits on many of its Compact Disc releases [], which, had any old black-hat hacker caused them to be installed, would most certainly have meant fines, jail time, and more.)  Nor did listeners want other examples of malicious, computer-harming DRM (Digital Rights Management) software.

I suppose no rational argument matters to some.  According to the RIAA, “There are more than 13 million legal tracks online today” ().  They then list numerous services licensed by the major record companies to sell music online.  Is this new reality not a contradiction of the “creativity apocalypse” warnings of the last decade, plastered all over their site and associated materials?  The RIAA is now selling the model of digital distribution (at approximately fair prices) that the first generation it labeled “pirates” tried to force on the music industry.  The RIAA has gone from calling people criminals for advocating this model to embracing the very same models of digital distribution and sale.  All along, aside from the “music just wants to be free” crowd, the backbone of the digital-distribution argument was the centrality of paying the artists; it was only the RIAA that cast events as doom and criminality run rampant.  Does it matter?  The goal is to dominate the industry and put the fear of piracy into the artists.  Today the RIAA is not really trying to convince pirates to stop, or governments to intercede; its goal is to prove to artists that they “need” the RIAA as middle-men in the distribution of artistic works (Nine Inch Nails, Radiohead, and countless other artists have shown this to be untrue by creating their own channels and avenues of access to their fans).  This is why, hopefully, people will begin to ignore the whining of floundering industries that provide pure, simple enjoyment and entertainment, and look instead at the issues that impact the very academic structures of our modern societies.  After the pointless squabbling and bickering of the content industry, we may begin to see the new landscape faced by modern academics and scholars.  The subsequent section is dedicated to these academic concerns.

A recent article in Atlantic Monthly by Julian Fisher, MD, a Boston-based neurologist and medical information entrepreneur, brings into the open some of the sad realities of the academic publishing world.  It is not the big, ugly, shiny rhetoric of the anti-intellectualism of the various arms of the political Right wing; rather, he examines the “pro-business” focus of governments and the centralization-obsessed larger academic institutions, which now face the same questions the music industry faced a decade ago: why are people impeding access, inflating costs, and making it increasingly difficult for people who have actually paid for access?  Why are taxpayers paying multiple times over for simple access to vital information, which is needed to make informed decisions in an increasingly complex world?  To clarify: if a voter must select which politician will make the right choices on the nuances of access to complex information and modern tools, how can those important political decisions be made without easy access to the original information?  What we end up with is a set of complex, modern, high-tech wedge issues: abortion (when life begins), euthanasia (who gets to decide when it is over), a thousand other “bodily autonomy” cases that simply have not been tested yet in the legal system, and network neutrality (how many layers of “middle men” get to exist, who pays, and whether it is logical to segregate network traffic based on arbitrary source decisions).

Illnesses that public figures have are much in the news of late — Ronald Reagan and his Alzheimer’s disease the most noteworthy — and I recently came across a brief description in a neurology journal of a medical problem that Franklin D. Roosevelt began to experience as he looked toward his fourth term — brief episodes of confusion that presumably represented epilepsy.  These symptoms were thoughtfully explained in the article, written by a neurologist, Steven Lomazow, who has co-written a book on the subject.

But for any of you to read that article would cost you dearly. Why? The dirty little secret of scholarly publishing.

What are the costs in this new Internet age? As you might suspect, they have plummeted (an article I wrote several years ago here [] is helpful), to roughly 1/100 of what they were if you produce the article as an electronic document only rather than in print.  Print is no longer necessary or even desired.  Why, then, the $30-$50 financial firewall that you need to pay to see the article I want to show you?  In part, tradition.  In part, publishers keep doing what they do and the scholars do not complain much, since their subscriptions come through their grants or university libraries.  But the libraries complain, individuals like all of you reading this should complain, and everyone in the developing world complains.

There are some initiatives to change this situation.  The National Institutes of Health now insist that research they fund, when published, must be made available somewhere at no cost.  Some journals are made available online selectively to lesser developed nations.  But there is no mad dash to change the system, even with the open-source software that supports the online publishing process and even multi-site synchronized archiving.  The traditional publishers continue to make their traditional profits, and I still cannot show you the article.  But you can buy the book about FDR for 1/10 of the cost of the article.  Now isn’t that a great idea?

So, from all this mess, what are the alternatives?  Creative Commons is one.  There is a formula to help conceive of where the value comes from under a Creative Commons regime: “Connect With Fans (CwF) + Reason To Buy (RtB) = The Business Model ($$$$)”.  This is the proposal advanced in the article “Sharing With Creative Commons: A Business Model for Content Creators”, by Cheryl Foong of Queensland University of Technology, Australia.  Sure, one says, that all sounds cute, but in the real world people are assumed to be criminal thieves… not quite so.

New business models are not limited to the music industry. Sooner or later, new business models will emerge in most creative industries where content can be enjoyed in digital form (e.g. books,[78] magazines,[79] news,[80] documentaries,[81] illustrations and images,[82] or films).

The following are four case studies on the integration of CC licensing into film production and distribution businesses. In particular, these case studies illustrate the differences between the use of CC by relatively unknown film producers (the creators behind the films Cafuné (2005) and Star Wreck (2005) respectively) and its use by major film studios (Kiss Kiss Bang Bang (2005) by Warner Brothers and Two Fists One Heart (2008) by Disney).

New models do exist, just as valid new ideas will persist.  Allowing the large industries to shape the discourse and focus on the “freeloaders”, while legitimate uses and innovative models are deprecated and slandered simply for sharing basal conceptions and outward appearances with the freeloader model (in that both involve digital distribution), harms the artists who have no fans yet, the societies they come from, and the big-name superstars as well.  Such broken discourses harm all the players.  The industry was simply too myopic to see the writing on the wall, the bells sounding a change in models, and it has paid for its blindness and for its use of laws to “force” an old, broken model on Western societies.

The issue of copyright infringement on the scale of databases such as Napster has taken many by surprise. With the emergence of greater digital and networking technology, it seems that large-scale databases of music, copyright-infringing or not, were an inevitable consequence. The ungoverned, global nature of the Internet hampers the ability of governments, organizations, and individuals who feel their copyright has been infringed to stop such databases. Indeed, a feature of the Internet is that it changes very quickly and has an undercurrent of circumvention of traditional rules.  The impact of this inevitable revolution on the three stakeholder groups was discussed. It can be concluded that, for the user, a question of ethical standards is raised; this group also demands a simple way to contribute to copyright royalties, otherwise it will take the easy option of free material. The conclusion for music artists is that they have a new opportunity to expand their audience and gain direct contact with it, though retaining control over intellectual property will be a challenge. For the music industry, the potential for a moneymaking business model is available; however, its market is changing, and a different approach to copyright will be needed.


Today the music industry is a shadow of its past, and who swept in to usurp it?  The digital distribution models that the industry demonized, spreading name-calling and rhetorical flourishes across “respected” news media (who happily played along, for they were next: after digital music came digitally distributed news models, in the vein of News Corp's “The Daily” on the iPad, though those issues would require another essay of similar length, as publishers fought tree and bark to “force” the maintenance of the “dead-tree” models).  People (consumers) managed to force that business model to change too.  Today digital access has surpassed print distribution, and many print publishers have collapsed under the weight of their ancient, broken model.

Resources Cited:

“Large-scale Copyright Infringement: the Inevitable Consequence of the Digital Age”; Lynette White and Sean Elliott, Melbourne School of Engineering, Department of Computer Science and Software Engineering.

Napster’s interim CEO Hank Barry addresses Congress, April 3, 2001, following the court injunction of a month prior; University of Florida, Interactive Media Labs Projects.


“RIAA, Resources for Students; online information access point”: RIAA Homepage.


“Tales from the Public Domain: BOUND BY LAW?”: Duke Law Comic Publications.

“Read This Academic Journal Article, but Prepare to Pay”: Julian Fisher, Atlantic Monthly, Feb 22, 2011.

“Sharing With Creative Commons: A Business Model for Content Creators”, by Cheryl Foong, Queensland University of Technology, Australia.

Technical report: An Estimate of Infringing Use of the Internet, January 2011; Envisional Studies.

Appendix 2:


Appendix 3:

What do people use the internet for?

From Digital Secrets.

The Next Grand Challenge; making reliable information extraction toolsets (aka WATSON)

IBM Centennial Film: They Were There - People who changed the way the world works

IBM – Expert interviews: Healthcare:

Building Watson – A Brief Overview of the DeepQA Project
