Dissecting the PARCC Propaganda

By Lynn Fedele

The Partnership for Assessment of Readiness for College and Careers (PARCC) website offers a vast array of information about this new, untried assessment system, and it is designed to sell the PARCC assessments to parents and educators at all levels. Because the PARCC is tied to the Common Core State Standards (CCSS), the website does a good job of promoting those standards, too. Published within a handbook for “State and District Leaders” on implementing the PARCC and the CCSS is Chapter 4, titled “Organize to Implement: Getting the Message Out.” It is a 14-page public relations manual, replete with charts and graphs and all sorts of useful suggestions.

The Montclair Schools have had an interesting relationship with the notion and function of public relations over the past year, and this year Central Services has hired its own public relations professional. While Mr. Frankel is indeed busy selling the notion that all the changes coming from the state and from Superintendent MacCormack are working wonderfully, the public relations efforts to support the CCSS and the PARCC go beyond one man’s employment with the school district. A close look at the PARCC’s chapter on organizing public support is telling; while any public relations work needs to be tailored to local populations and concerns, there is a good deal in the handbook that resonates loudly and clearly here in Montclair.

So let’s take a look at what the PARCC suggests.


The chapter begins this way:

“One risk faced by any change effort is ‘undercommunicating by a factor of 10, or even 100.’ The communications effort should receive the same amount of effort as the implementation effort.” While this may seem innocuous enough, the implementation effort includes redesigning curriculum, updating technology, ordering and adapting new materials, and accelerating the level of instruction (frequently beyond grade level). These efforts have taken countless hours, cost a lot of money, and upended many of our classrooms, not necessarily for the better. That public relations should take as much time and effort is troubling. There’s an old cliché: “A good idea sells itself.” This opening statement seems like an admission that this is not a good idea and that the community will take a lot of convincing to get on board.
 *
Their first suggestion? It comes in the chapter’s first section, “Build a Base of Support by Establishing a ‘Guiding Coalition.’”
*
First, they address why this has to happen:
“Flagging public support can push implementation off the rails. Pressure to water down student expectations may build, for example, once new assessment results show that students are not as prepared as once believed.”
The students are being assessed by a system that is largely untried. They are being assessed on standards that are frequently found to be developmentally inappropriate. It is not a matter of our students being under-educated, or of our schools being underdeveloped, or of our teachers not having high expectations for learning; it is a matter of the PARCC being an unfair test. What PARCC characterizes as “pressure to water down student expectations” is the joined voices of teachers, parents, education specialists, and students themselves saying that curriculum and assessment need to be fair, to be developmentally appropriate, and to be focused on creating skills for lifelong learning; these are voices saying clearly that demanding too much too fast is damaging to children.
 *
The section continues, “Inevitably, state and district leaders need help in keeping rigorous expectations for students at the heart of their agenda. Though the strategic implementation team [the district administrative employees responsible for getting the schools ready to administer the test] plays a key role in supporting their agenda, a small group of highly visible and credible leaders are needed to sustain effort in the face of pushback.” This small group is, of course, the guiding coalition. Locally, who fits this bill? Will the SATPs be further co-opted? The Achievement Gap Panel? Members of the Board of Ed? Time will tell. Certainly, spending public tax money on a P.R. consultant will help, too.
*
Of course, publicly disparaging those who oppose the CCSS and PARCC is part and parcel of local corporate-reform efforts. Teachers are speaking out? It’s all too easy to paint teachers as lazy or, in Dr. MacCormack’s words at a Board of Ed meeting last year, “afraid of change.” Parents are concerned? Members of Montclair Cares About Schools are “uncivil,” or a fringe group, or engaging in personal attacks. Of course teachers and parents want children to learn; it’s silly to think otherwise. But it’s also silly to think that by echoing the word “rigor,” proponents of PARCC and CCSS are actually advocating what is best for students.
 *
But what is best for students is not the focus of the P.R. chapter, and what comes next is disconcerting. According to PARCC: “The role of this ‘guiding coalition’ is to remove bureaucratic barriers to change, exert influence at key moments to support implementation and offer counsel to the strategic implementation team.” Many of these “barriers to change” are actual democratic structures put in place to keep the public schools accountable to the public. How much more quickly would Dr. MacCormack’s and the corporate reformers’ changes be in place if they were not open to public scrutiny? And upon whom should the guiding coalition “exert influence”? What kind of “influence” is being implied here? The chapter does not clarify.
 *
What the chapter does clarify, though, is how to keep the propaganda manageable. In addressing pushback, the handbook states, “The best way to ensure that this does not happen is to play offense — make sure your messages and goals reach key audiences first and are regularly reinforced by credible messengers. In fact, don’t be afraid to communicate even if your implementation plan is in flux.” This need for constant top-down messaging, even in the midst of “flux,” could explain the discrepancies in information this year. For instance, parents were given a list of PARCC skills being covered in the middle schools, while the middle school teachers were not informed of their responsibility to teach those skills. Parents and teachers have been told there will be no test prep; indeed, in her posting “Why PARCC” on the Montclair Board of Education website, Dr. MacCormack writes, “As superintendent, I set the tone for how these tests will be interpreted in the district, regardless of the State’s mandates, and I will not support teaching to the test.” Yet some schools are giving practice PARCC tests. One administrator told a roomful of parents that a paper version of the PARCC will be available, while other administrators say it will not be. And the district’s PowerPoint presentation to parents about the PARCC gives conflicting figures for how long the test will take to administer.
 *
Yet there are other aspects of message management that the district seems to be handling quite well, in particular the simplicity of the overall message itself. The PARCC recommends developing and repeating “three key messages” and instructs, “Repeat, repeat, repeat these messages across all communication channels and by all public messengers.” They even give suggested messages, including:
 *
PARCC: “State standards and assessments have historically been set too low, offering an inaccurate view of how well our students are actually achieving.” (Dr. MacCormack paraphrased this at a Board of Ed meeting earlier this year, and in “Why PARCC” she writes, “Problems with the NJ ASK were numerous. For one, NJ ASK was not well aligned with classroom instruction, and therefore many teachers spent time doing test preparation. Turnaround of test results was slow and provided few concrete insights into student learning.” This comes despite the fact that New Jersey’s standards have been shown to be among the nation’s best.)
 *
PARCC: “The Common Core State Standards and aligned common assessments are more rigorous than what we have [had] in place… and will provide an honest picture of how well our students, schools and system are achieving on the most critical knowledge and skills.” (Data, anyone? Dr. MacCormack writes in “Why PARCC,” “In addition, data gathered from assessment tests can also guide educators toward improving classroom instruction and foster the sharing of best practices for teaching our students. This is the ideal, but for more than a decade, under the State of New Jersey’s NJ ASK testing program, the ideal has been too far out of reach.”)
 *
PARCC: “Implementing the Common Core State Standards is a critical step toward ensuring that all students receive the true education they need for success in life.” (Last year, Dr. MacCormack and several Board of Ed members touted the CCSS as a means of addressing the achievement gap. In “Why PARCC,” Dr. MacCormack writes, “the NJ ASK offered limited measures of a student’s critical-thinking and problem-solving skills, key indicators of a student’s future success. PARCC assessments, on the other hand, are said to provide a more thorough examination of student development than prior tests, with more writing and greater focus on critical thinking and problem-solving skills.”)
 *
What is happening in Montclair is part of a national agenda — PARCC even uses the word “agenda.”  They’ve published their playbook; we don’t have one. We can only speak the truth.
*
Permalink: https://montclaireducationmatters.com/2014/12/14/dissecting-the-parcc-propaganda/

Myths and Truths of Teacher Evals

AchieveNJ and Teacher Evaluations

The new evaluation system implemented by the State of New Jersey carries serious consequences, and educators across the state are feeling incredible pressure. Myths abound as to the focus, purpose, and effectiveness of the evaluations, and the truth is as complicated as the new system itself. Many in the public arena interpret teachers’ criticisms of the new process as whining, and some go so far as to accuse teachers of resisting evaluation altogether, insinuating if not stating outright that teachers do not want to be assessed because they fear being found incompetent. But it’s easy to make such broad statements when you don’t have the facts. There are a lot of myths going around – so what is the truth?

Myth: Teachers do not want to be evaluated.

Truth: Not true. Teachers always have been evaluated. We’re very accustomed to it. Having an administrator observe a class is nothing new, and many teachers welcome the opportunity to introduce principals and supervisors to the wonderful things that students do in the classroom.

But we want the evaluations to be fair. The criteria for what must be included in a teacher evaluation have changed since the advent of Race to the Top, and New Jersey has answered by developing AchieveNJ. This is a highly complex system that includes classroom observations, Student Growth Objectives (SGOs), and Student Growth Percentiles (SGPs). While this seems fine on the surface, problems arise when an administration chooses to use the observation portion to retaliate against teachers for exercising free speech, or when the state ties SGPs to a brand-new test (the PARCC) that has never been administered and for which there is no information about accuracy or reliability.

Myth: Classroom observation criteria are fair and objective.

Truth: Well, that largely depends upon how one defines “fair,” and observations are always subjective, which is not necessarily a bad thing. There are five observation rubrics available to all districts in NJ, and Montclair is using the Marshall Rubric. There is too much in it to summarize here, which is itself part of the problem: there is simply too much. Read the rubric; how much of a difference is there between earning a 3 or a 4 in most categories? Between a 1 and a 2? Look at the number of categories. How can any teacher demonstrate all of this during a single observation? Can any administrator see all of it? If an administrator anywhere in the state wanted to target a teacher, he or she could easily do so, because teachers can challenge evaluation procedures but not the ratings themselves. Whatever an administrator says goes.

Myth: No administrator would be so vindictive as to use evaluations to target specific teachers.

Truth: Yes, some would. While we have yet to see a case brought to public attention through the courts in Montclair, it has happened in Newark. Read about it here.

Myth: SGPs are a fair way to use data.

Truth: SGPs, which are a form of Value-Added Measurement (VAM), are student test data fed into a formula that affects a teacher’s overall rating. But using VAMs is a complicated and expensive process, one that is known to harm schools in many ways if not done properly. (See links to articles about VAMs on our Articles page.) According to the AchieveNJ website, “SGP is a measure of how much a student improves his or her NJ ASK [PARCC] score from one year to the next compared to students across the state who had a similar test score history.” [There is no PARCC history. How can this count for 10% this year?] The state then uses the median SGP of a teacher’s students as that teacher’s score. There are many factors that affect student test scores – from home environment to socioeconomic status to whether or not the student ate breakfast on the day of the test, to name a few – but these are not taken into account.

Myth: SGOs help teachers focus their instruction on standards.

Truth: Instruction is, as mandated by the state, centered on standards. It has been for many years, since New Jersey implemented the Core Curriculum Content Standards (CCCS) in 1996, so the NJ DOE’s assertion that SGOs will help teachers focus on standards is untrue – we already do. Since the adoption of the Common Core State Standards (which replaced the CCCS for ELA and math), instruction has been aligned with those as well. The evaluation requirements demand that a teacher focus on one standard for each SGO and complete a fairly long process of documentation. The issue is that the standards, again, are already being addressed. The SGO requirement does not lead teachers to bring in new methods or learn new content or strategies; it does lead teachers to spend a lot of time documenting what is already documented in our lesson plans, which the administration checks regularly. Because SGOs must document student growth, teachers must establish a baseline at the beginning of the school year; this is why teachers are now giving tests and assessments in September that they know the students will not do well on. If we have not yet taught a skill or concept, of course the pre-assessment scores will be low. We do not want to waste our time or the children’s time with these pre-assessments, but we have no choice.

Myth: Teachers getting bad evaluations must be bad teachers.

Truth: Not at all. In NJ and in other states that have adopted these evaluations, very good teachers are being put on probation and let go. Think of the teachers who volunteer each year to work with the students who struggle the most: no matter their effort, if test scores are low, they are to blame. Teachers who work with English Language Learners and special-needs students are also at greater risk of low SGP and SGO scores. Test scores also reflect student placement; if an administrator schedules a teacher with many overcrowded classes filled with students with low skill levels, that teacher will do poorly on the SGP (and possibly on the observation as well). Under the guidelines of AchieveNJ, two consecutive “ineffective” ratings, or one “ineffective” and one “partially effective” rating, will trigger tenure action by the state DOE. This means the state can revoke a teacher’s tenure even if the administrators want that teacher retained; while a district can challenge the state’s tenure revocation, it is limited in what it can do.

There is nothing, not one thing, in the evaluation system that takes into account how factors outside the school affect student performance.

Myth: You’re whining.

Truth: We’re not. Our jobs are at stake.

Permalink: https://montclaireducationmatters.com/2014/11/23/myths-and-truths-of-teacher-evals/

PARCC: Pearson’s Weapon of Choice

By John Wodnick

For over twenty years, I’ve taught high school English, and I’ve always tried to make my classroom a place where students might experience the joy of intellectual exploration and discovery. I teach literature because I have felt the transformative power of great novels, plays and poems on my own consciousness, and I’m eager to give young students that same inspiring experience. Such experiences are slowly but surely being rooted out of our current educational system, mainly because they’re difficult to measure and because they can’t be monetized, and the PARCC is just the latest and most potent weapon designed to eliminate them.

 

Taking the PARCC on Sunday here in Montclair in the company of many other thoughtful adults, I experienced the confining and artificial nature of trying to read literature closely in the context of standardized multiple-choice testing. What I figured out in the course of this experience is that the PARCC’s main function is to create more market share for itself. It certainly isn’t to inspire in students any great love of literature, or to get them to think very deeply about the world they live in. That is because it is a measurement tool, not an educational tool, and the manic desire to measure every aspect of learning is, sadly and ironically, depriving students of much of what makes learning valuable.


This mania for measurement has political and economic consequences as well as educational ones. The more we measure a school’s success by its standardized test scores, the more we disempower the community that school serves, disempower the educators serving that community, and, ultimately, harm the students we’re trying to serve. Measuring educational success through test scores is anti-democratic, anti-student, and anti-parent. It fosters an attitude of distrust between administrators and teachers and pushes all decision-making authority upward toward a centralized power, often one that resides outside the district. You can read heartbreaking stories about how this process is playing out in Newark on Bob Braun’s Ledger, on the Facebook page of the Newark Students’ Union, or in the national coverage the situation there recently received on Salon.com.

 

Those at the top of the power structure these tests help to preserve use many strategies to maintain their authority. Questions are perceived as threats, and those threats are eliminated insidiously, by reframing the debate in ways that marginalize the questioners. Skeptics about the value of standardized tests are labeled as being against academic rigor. Those who wish to maintain democratic control over their local districts are dismissed as rabble-rousing radicals. Marketers learned these tricks long ago; politicians understand them. They are very powerful and very effective – they just aren’t all that worthwhile if your goal is to create profound learning experiences that make great classrooms and great teachers memorable to real, individual students. In seeking to root out mostly mythical bad teachers, those who use tests to control education are also rooting out greatness, risk-taking, adventure, and the inspiring experience of discovery. They are enforcing mediocrity for the sake of making outcomes easy to measure. Tests imposed from above are self-perpetuating and self-justifying devices of social control. Why must we test? Because we need to make sure students are doing well on tests. This is not education. But it does ensure that gullible districts that think only measurable outcomes matter will become great customers for the makers of the PARCC.

 

Don’t believe that this is about profits? Don’t take it from me – here’s Glen Moreno, Chairman of Pearson, quoted directly from Pearson’s own 2013 annual report: “As the world’s leading learning company we are in an increasingly strong position to take advantage of this demand and deliver products and services that measurably improve learning outcomes for our customers and learners. I am also confident that this will positively impact shareholder value.” Measurably improving learning outcomes means testing, testing, testing, and that means positively impacting shareholder value.

 

Ultimately, what the designers of the PARCC propose is to replace many hours of valuable class time with many hours of oppressive testing. They are eliminating countless hours in which students might be encouraged to confront deep questions about their own existence or to discuss with peers the social and political issues raised by the literature they’re grappling with together, and replacing them not only with hours spent on the tests themselves, but also with hours spent preparing for those tests. This deprives students of crucial educational experiences and makes it more and more difficult to teach them well.

 

PARCC does this by seeking to narrowly redefine educational success, for all classrooms and all districts, as success on this one test. Teachers who wish to inspire, to connect, and to move their students forward in their relationships with their communities and their understanding of their place in the universe are looked on with suspicion, while those who can develop flashy ways to drill students into mastery of relatively simple skills are lionized. What does not immediately and obviously improve test scores is scrutinized, while any classroom activity that serves those scores is glorified. We have to ask – who is served by this? Are students served by this, or are the test-makers? Jersey Jazzman, a favorite blogger of mine, has a pretty thorough answer here.

 

So, if students’ mastery of PARCC-imposed skills is not a true measure of a successful school or a successful education, what is? Schools that are truly democratic in nature help students imagine a better future not only for themselves but for their larger community, and the education they offer favors critical engagement and inspiration. A powerful democratic education involves experiences of discovery in collaboration with classmates, and a celebration of the creativity and insight achieved through the mastery of coherent subjects explored and examined with autonomous, trusted, and energized mentor teachers. Contrast this vision with the world imagined by PARCC, which favors the mastery of discrete skills through constant individualized monitoring and submission to a testing regime it is uncivil to question, where success has only one measure – what is your number?

 

Which is the sort of education you want for your child?  Which do you think a curriculum driven by testing will achieve?

Permalink: https://montclaireducationmatters.com/2014/11/13/parcc-pearsons-weapon-of-choice/

Delran EA Knocks it Out of the PARCC!

The Delran Education Association has published a phenomenal statement of its opposition to standardized testing and the damage being wrought upon New Jersey’s public schools. From an analysis of why its members oppose high-stakes testing, to a history of the testing movement, to the negative effects testing has on students and teachers, the annotated statement covers all the bases eloquently and forcefully.

Read the full statement here:

https://teacherbiz.wordpress.com/2014/11/10/the-delran-education-associations-position-on-high-stakes-standardized-testing/

(November 11, 2014)

We applaud these brave teachers for making so bold and so necessary a public statement!

Permalink: https://montclaireducationmatters.com/2014/11/11/delran-ea-knocks-it-out-of-the-parcc/

NJ Teachers Dream of Finland

At the NJEA convention this past Thursday, Pasi Sahlberg, an education policy advisor from Finland, delivered a keynote address that left a roomful of New Jersey teachers both envious and hopeful. Finland has garnered international attention for a public education system whose students consistently score at or near the top on the international PISA tests – but high test scores are not the cause of the envy. Instead, Sahlberg spoke of the respect for children and educators that lies at the heart of the Finnish school system, the focus on cooperation and the rejection of education as competition, and the true meaning of equity. He left the attendees feeling hopeful because he demonstrated what a society can do when it is determined to educate all children to the best of their abilities, in the hope that they will go on to create good lives. For two good accounts of his speech, see the links below.

From teacherbiz: http://teacherbiz.wordpress.com/

From The Press of Atlantic City: http://www.pressofatlanticcity.com/news/new-jersey-teachers-learn-secrets-of-finland-s-academic-success/article_d3c4715c-6611-11e4-a38a-fbcb6a002cb5.html

What’s Wrong with the Core? High School Reading

By Lynn Fedele

As a high school English teacher (not in Montclair), I have had my professional life consumed by the Common Core State Standards (CCSS), and I am all too familiar with the high school English Language Arts standards. As they are currently the law of the land, I am implementing them in my classroom, and so I witness them in action on a daily basis.

The CCSS were designed – not by practicing educators, by the way – from the top down, meaning that their creators started with the upper levels of the high school standards and then reverse-engineered them down to the early elementary grades. That said, as I teach seniors, I am among the lucky: the CCSS are more closely aligned with what I have been teaching and are more developmentally appropriate for my students than many of the standards are for younger children. Nonetheless, they leave much to be desired.

In short, the standards are not out to make lifelong readers and thinkers; they are out to train students to think about what other people have to say, and to think more about how it is said than about what it means.


Last year, I was directed by my administration to list on the board the standards that my class was meant to cover each day, which is not an unusual practice. But I balked. Language arts skills are not taught in isolation; the skills overlap and are recursive in nature, meaning that we teach and re-teach the same skills at increasing levels of difficulty, often teaching several skills simultaneously. On any given day, students may read a passage, discuss it in groups, and then write about it. This involves reading skills, speaking and listening skills, language skills, and writing skills. So instead of taking five to ten minutes each period, every day, to write out the standards, I created a few posters and hung them over the board in my classroom. For the sake of space, I condensed them.

The 11-12 Reading Literature standards, in essence, call for this:

1) Cite strong and thorough textual evidence

2) Determine two or more central ideas of a text

3) Analyze the author’s choices in presentation

4) Determine the meaning of words and phrases in a text

5) Analyze the effectiveness of the structure of a text in creating meaning

6) Determine what the text implies rather than states

7) Analyze multiple interpretations of a story

8) [not applicable to Lit]

9) Analyze 18th, 19th, and 20th century American works

10) Read complex texts on grade level

And that’s when it became obvious to me that there is so much missing.

The standards seem reasonable when looked at in isolation. In other words, we can pick out any one standard and it will name something high school seniors should do: they should cite evidence; they should understand vocabulary. That’s fine.

So what’s missing from the Core?

First, what’s missing is context. The CCSS assume that each text is a discrete entity. Students are not required to draw on their pre-existing knowledge in any given field in order to integrate new information. They are not required to do any background research to complement and extend their understanding. They are not required to draw connections between disciplines – never mind what a book might imply about history or philosophy. Even when comparing texts on the same subject, students do not need to understand why the texts differ – how issues of history, politics, socioeconomics, race, sex, gender, culture, and identity affect the content – just how they differ (one gives more detail; they have different forms; they use different narrative perspectives).

What else is missing? Oddly enough, for all they bandy the word around, at grades 11-12 the standards are missing “rigor,” which becomes evident when they are set against Bloom’s Taxonomy. Bloom’s Taxonomy – a long-standing model of learning – categorizes and explains how children learn; in the cognitive domain, there are six levels. The CCSS Reading Literature standards do not call upon students to use the top two levels of critical thinking: Synthesis and Evaluation. It is in these two levels that students combine information into new, creative wholes and judge what they read and learn. These are the levels in which they challenge their own assumptions and use newly gained knowledge to create something original. The standards, on the whole, stop at Analysis, which is an important step in critical thinking, but not the top.

In essence, the CCSS are missing the reader. While students have to analyze how a text is structured, how its parts work together, and how the author makes choices to get his or her point of view across, they have no personal interaction with meaning. They do not have to understand the work’s content as relevant to themselves or to the lives they live. They do not have to make judgments about what they read or bring the content into themselves on any level to complete the tasks outlined by the standards. No agreeing or disagreeing with an author – just explaining how the text functions. In sum, no opinions necessary.

And lastly, and sadly, the standards suppose that readers never read for enjoyment.

So, why omit all the best stuff? Because the best aspects of reading and interacting with literature simply cannot be measured on a standardized test.

Permalink: https://montclaireducationmatters.com/2014/11/02/whats-wrong-with-the-core-high-school-reading/