• The Real “Trouble” With Technology, Online Education And Learning

    It’s probably too early to say whether Massive Open Online Courses (MOOCs) are a “tsunami” or a “seismic shift,” but, continuing with the natural disaster theme, the last few months have seen a massive “avalanche” of press commentary about them, especially within the last few days.

    Also getting lots of press attention (though not as much right now) is Adaptive/Personalized Learning. Both innovations seem to fascinate us, but probably for different reasons, since they are so fundamentally different at their cores. Personalized Learning, like more traditional concepts of education, places the individual at the center. With MOOCs, groups and social interaction take center stage and learning becomes a collective enterprise.

    This post elaborates on this distinction, but also points to a recent blurring of the lines between the two – a development that could be troubling.

    But, first things first: What is Personalized/Adaptive Learning, what are MOOCs, and why are they different?

  • The Unfortunate Truth About This Year's NYC Charter School Test Results

    There have now been several stories in the New York news media about New York City’s charter schools’ “gains” on this year’s state tests (see here, here, here, here and here). All of them trumpeted the 3-7 percentage point increase in proficiency among the city’s charter students, compared with the 2-3 point increase among their counterparts in regular public schools. The consensus: Charters performed fantastically well this year.

    In fact, the NY Daily News asserted that the "clear lesson" from the data is that "public school administrators must gain the flexibility enjoyed by charter leaders," and "adopt [their] single-minded focus on achievement." For his part, Mayor Michael Bloomberg claimed that the scores are evidence that the city should expand its charter sector.

    All of this reflects a fundamental misunderstanding of how to interpret testing data, one that is frankly a little frightening to find among experienced reporters and elected officials.

  • What Florida's School Grades Measure, And What They Don't

    A while back, I argued that Florida's school grading system, due mostly to its choice of measures, does a poor job of gauging school performance per se. The short version is that the ratings are, to a degree unmatched by most other states' systems, driven by absolute performance measures (how highly students score) rather than growth (whether students make progress). Since more advantaged students tend to enter the school system with higher scores, schools are largely being judged not on the quality of instruction they provide, but rather on the characteristics of the students they serve.
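
    To make the status/growth distinction concrete, here is a minimal sketch in Python. The schools, scores and numbers are entirely hypothetical, invented for illustration; the point is only how the two kinds of measures can rank the same pair of schools in opposite order.

    ```python
    # Hypothetical illustration: a status (absolute performance) measure vs.
    # a growth measure. All numbers are invented for the example.
    schools = {
        # name: (avg. score at entry, avg. score now)
        "School A (advantaged intake)": (75, 78),     # enters high, grows little
        "School B (disadvantaged intake)": (50, 62),  # enters low, grows a lot
    }

    for name, (entry, current) in schools.items():
        status = current           # what proficiency-style measures capture
        growth = current - entry   # progress made while enrolled
        print(f"{name}: status = {status}, growth = {growth:+d}")

    # A status-driven system rates School A higher (78 vs. 62); a growth-driven
    # system rates School B higher (+12 vs. +3), despite its lower scores.
    ```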

    New results were released a couple of weeks ago. The release was highly anticipated, as the state had made controversial changes to the system, most notably the inclusion of non-native English speakers and special education students, changes that officials said were meant to raise standards and expectations. In a limited sense, that's true: grades were, on average, lower this year. The problem is that the system uses the same measures as before (including a growth component that is largely redundant with proficiency). All that has changed is the students who are included in them. Thus, to whatever degree the system now reflects higher expectations, they are still expectations for outcomes that schools mostly cannot control.

    I fully acknowledge the political and methodological difficulties in designing these systems, and I do think Florida's grades, though exceedingly crude, might be useful for some purposes. But they should not, in my view, be used for high-stakes decisions such as closure, and the public should understand that they don't tell you much about the actual effectiveness of schools. Let’s take a very quick look at the new round of ratings, this time using schools instead of districts (I looked at the latter in my previous post about last year's results).

  • How Often Do Proficiency Rates And Average Scores Move In Different Directions?

    New York State is set to release its annual testing data today. Throughout the state, and especially in New York City, we will hear a lot about changes in school and district proficiency rates. The rates themselves have advantages – they are easy to understand, comparable across grades and reflect a standards-based goal. But they also suffer severe weaknesses, such as their sensitivity to where the bar is set and the fact that proficiency rates and the actual scores upon which they’re based can paint very different pictures of student performance, both in a given year as well as over time. I’ve discussed this latter issue before in the NYC context (and elsewhere), but I’d like to revisit it quickly.

    Proficiency rates can only tell you how many students scored above a certain line; they are completely uninformative as to how far above or below that line the scores might be. Consider a hypothetical example: A student who is rated as proficient in year one might make large gains in his or her score in year two, but this would not be reflected in the proficiency rate for his or her school – in both years, the student would just be coded as “proficient” (the same goes for large decreases that do not “cross the line”). As a result, across a group of students, the average score could go up or down while proficiency rates remained flat or moved in the opposite direction. Things are even messier when data are cross-sectional (as public data almost always are), since you’re comparing two different groups of students (see this very recent NYC IBO report).
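
    As a rough sketch of that hypothetical, consider the following Python snippet. The scores and the cutoff of 65 are made up; the cohort's average jumps by 11 points between years, yet the proficiency rate does not move at all.

    ```python
    # Made-up scores for one small cohort in two years; the "proficient" bar
    # is set at 65 for illustration.
    CUTOFF = 65
    year1 = [65, 66, 70, 50, 55]  # three of five students at or above the bar
    year2 = [80, 85, 90, 52, 54]  # same three above it, but much further above

    def proficiency_rate(scores):
        return sum(s >= CUTOFF for s in scores) / len(scores)

    def average(scores):
        return sum(scores) / len(scores)

    print(average(year1), proficiency_rate(year1))  # 61.2 0.6
    print(average(year2), proficiency_rate(year2))  # 72.2 0.6
    # The average rises 11 points; the proficiency rate stays flat at 60 percent.
    ```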

    Let’s take a rough look at how frequently rates and scores diverge in New York City.

  • Examining Principal Turnover

    Our guest author today is Ed Fuller, Associate Professor in the Education Leadership Department at Penn State University. He is also the Director of the Center for Evaluation and Education Policy Analysis as well as the Associate Director for Policy of the University Council for Educational Administration.

    “No one knows who I am,” exclaimed a senior in a high-poverty, predominantly minority and low-performing high school in the Austin area. She explained, “I have been at this school four years and had four principals and six Algebra I teachers.”

    Elsewhere in Texas, the first school to be closed by the state for low performance was Johnston High School, which was led by 13 principals in the 11 years preceding closure. The school also had a teacher turnover rate greater than 25 percent in almost all of those years, and greater than 30 percent in seven of them.

    While the above examples are rather extreme cases, they do underscore two interconnected issues – teacher and principal turnover – that often plague low-performing schools and, in the case of principal turnover, afflict a wide range of schools regardless of performance or school demographics.

  • A Chance To Help Build Grassroots Democracy In China

    Our guest author today is Han Dongfang, director of China Labor Bulletin. You can follow him on Weibo in Chinese and on Twitter in English and Chinese. This article originally appeared on the China Labor Bulletin, and has been reprinted with permission of the author.

    The first of February this year was a historic day in the Chinese village of Wukan. Several thousand villagers, who had chased out their corrupt old leaders, went to the polls to democratically elect new representatives. A few months later, on 27 May, there was another equally historic democratic election in a factory in nearby Shenzhen, when nearly 800 employees went to the polls to elect their new trade union representatives. These two elections, one in the countryside, the other in the workplace, both represent important milestones on the road towards genuine grassroots democracy in China.

    Just like in Wukan, the Shenzhen election came about a few months after a mass protest at the ineptitude of the incumbent leadership. The workers at the Omron electronics factory staged a strike on 29 March demanding higher pay and better benefits and, crucially, democratic elections for a new trade union chairman.

  • Cheating In Online Courses

    Our guest author today is Dan Ariely, James B. Duke Professor of Psychology and Behavioral Economics at Duke University, and author of the book The Honest Truth About Dishonesty (published by HarperCollins in June 2012).

    A recent article in The Chronicle of Higher Education suggests that students cheat more in online than in face-to-face classes. The article tells the story of Bob Smith (not his real name, obviously), who was a student in an online science course. Bob logged in once a week for half an hour in order to take a quiz. He didn’t read a word of his textbook, didn’t participate in discussions, and still got an A. Bob pulled this off, he explained, with the help of a collaborative cheating effort. Interestingly, Bob is enrolled at a public university in the U.S., and claims to work diligently in all his other (classroom) courses. He doesn’t cheat in those courses, he explains, but with a busy work and school schedule, the easy A is too tempting to pass up.

    Bob’s online cheating methods deserve some attention. He is representative of a population of students who have striven to keep pace with their instructors’ efforts to prevent cheating online. The tests were designed in ways that made cheating more difficult, including a limited time to take the test and randomized questions drawn from a large test bank (so that no two students took the exact same test).
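
    For a sense of what such a design looks like in practice, here is a rough Python sketch. The test bank, quiz length and per-student seeding are all hypothetical, not details from the article.

    ```python
    # Hypothetical sketch of a quiz drawn at random from a large test bank,
    # so that no two students are likely to see the same set of questions.
    import random

    QUESTION_BANK = [f"Question {i}" for i in range(1, 201)]  # 200-item bank
    QUIZ_LENGTH = 10  # each student answers 10 questions, under a time limit
                      # enforced separately by the testing platform

    def build_quiz(student_id: str) -> list:
        """Draw a reproducible, per-student random sample from the bank."""
        rng = random.Random(student_id)  # seed on the student for reproducibility
        return rng.sample(QUESTION_BANK, QUIZ_LENGTH)

    print(build_quiz("student-001"))
    print(build_quiz("student-002"))  # almost certainly a different quiz
    ```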

  • Low-Income Students In The CREDO Charter School Study

    A recent Economist article on charter schools, though slightly more nuanced than most mainstream media treatments of the charter evidence, contains a very common, somewhat misleading argument that I’d like to address quickly. It’s about the findings of the so-called "CREDO study," the important (albeit over-cited) 2009 national comparison of student achievement in charter and regular public schools in 16 states.

    Specifically, the article asserts that the CREDO analysis, which finds a statistically discernible but very small negative impact of charters overall (with wide underlying variation), also finds a significant positive effect among low-income students. This leads the Economist to conclude that the entire CREDO study “has been misinterpreted,” because its real value is in showing that “the children who most need charters have been served well.”

    Whether or not an intervention affects outcomes among subgroups of students is obviously important (though one has hardly "misinterpreted" a study by focusing on its overall results). And CREDO does indeed find a statistically significant, positive test-based impact of charters on low-income students, vis-à-vis their counterparts in regular public schools. However, as discussed here (and in countless textbooks and methods courses), statistical significance only means we can be confident that the difference is non-zero (it cannot be chalked up to random fluctuation). Significant differences are often not large enough to be practically meaningful.
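
    As a back-of-the-envelope illustration (with made-up numbers, not CREDO's actual estimates), here is how a difference of just one-hundredth of a standard deviation can clear the conventional significance bar once samples are large enough:

    ```python
    # Two-sample z-test with invented numbers: a tiny difference in mean
    # scores becomes "statistically significant" at a large enough sample size.
    import math

    n1 = n2 = 100_000   # very large samples, as in state-level testing data
    mean_diff = 0.01    # one-hundredth of a standard deviation: tiny in practice
    sd = 1.0            # scores expressed in standard-deviation units

    se = sd * math.sqrt(1 / n1 + 1 / n2)  # standard error of the difference
    z = mean_diff / se

    print(f"z = {z:.2f}")  # z = 2.24, beyond the usual 1.96 cutoff
    # Significant, yes; but a hundredth of a standard deviation is far too
    # small to be practically meaningful.
    ```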

    And this is certainly the case with CREDO and low-income students.

  • The Data Are In: Experiments In Policy Are Worth It

    Our guest author today is David Dunning, professor of psychology at Cornell University, and a fellow of both the American Psychological Society and the American Psychological Association. 

    When I was a younger academic, I often taught a class on research methods in the behavioral sciences. On the first day of that class, I took as my mission to teach students only one thing—that conducting research in the behavioral sciences ages a person. I meant that in two ways. First, conducting research is humbling and frustrating. I cannot count the number of pet ideas I have had through the years, all of them beloved, that have gone to die in the laboratory at the hands of data unwilling to verify them.

    But, second, there is another, more positive way in which research ages a person. At times, data come back and verify a cherished idea, or even reveal a more provocative or valuable one that no one had ever expected. It is a heady experience in those moments for the researcher to know something that perhaps no one else knows, to be wiser (more aged, if you will) in a small corner of the human experience that he or she cares about deeply.

  • Share My Lesson: The Imperative Of Our Profession

    Leo Casey, UFT vice president for academic high schools, will succeed Eugenia Kemble as executive director of the Albert Shanker Institute, effective this fall.

    "You want me to teach this stuff, but I don't have the stuff to teach." So opens "Lost at Sea: New Teachers' Experiences with Curriculum and Assessment," a 2002 paper by Harvard University researchers about the plight of new teachers trying to learn the craft of teaching in the face of insubstantial curriculum frameworks and inadequate instructional materials.

    David Kauffman, Susan Moore Johnson and colleagues interviewed a diverse collection of first- and second-year teachers in Massachusetts who reported that, despite state academic standards widely acknowledged to be some of the best in the country, they received “little or no guidance about what to teach or how to teach it. Left to their own devices they struggled day to day to prepare content and materials. The standards and accountability environment created a sense of urgency for these teachers but did not provide them with the support they needed."

    I found myself thinking about this recently when I realized that, with the advent of the Common Core State Standards, new teachers won’t be the only ones in this boat. Much of the country is on a fast-track toward implementation, but with little thought about how to provide teachers with the “stuff” – aligned professional development, curriculum frameworks, model lesson plans, quality student materials, formative assessments, and so on – that they will need to implement the standards well.