Thinking of Powers…

If you could only use the numbers 1 to 9, each only once, could you fill in the boxes to make the figure true?


If not, explain why it doesn’t work.  Where exactly in the table do you have problems?  What is the smallest number of adjustments needed to make the table work?
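Since the original figure doesn't come through here, below is a minimal brute-force sketch (in Python) of how you can check a puzzle like this: assign the digits 1 to 9 to the boxes in every possible order and test the relations. The three power equations in the sketch are stand-ins of my own, not the actual figure; swap in the real relations to use it.

from itertools import permutations

def satisfies(d):
    # d is a tuple of the nine digits 1-9 in some order, one digit per box.
    a, b, c, e, f, g, h, i, j = d
    # Hypothetical stand-in relations; replace these with the ones in the figure.
    return a ** b == c and e ** f == g and h ** i == j

solutions = [d for d in permutations(range(1, 10)) if satisfies(d)]
print(len(solutions), "assignment(s) satisfy the stand-in relations")

If the search comes back empty, that is your answer to the second question: the relations can't all hold at once, and you can start looking for which one breaks and how few changes would fix it.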

Where our youth go…

I haven’t been on the blog horn or even Twitter that much other than #MSMathChat, and that is because right now I am struggling a bit to keep my head above water this year.  There is a lot going on (and taking Grad classes on top of everything wasn’t a good call).

This post is gonna be short, but it just hit me yesterday and I wanted to get it on my blog so I have it and remember.

This week has been fairly slow; I have had 6 new students come in and 6 leave.  I'm getting used to that, even though it makes keeping a cohesive classroom hard, but I have been managing.  That's not the factor that hit me; it's this one:

70% of my current students are return students.

Normally high school teachers don't bat an eye at that statistic; in fact, they expect an even higher one.  But not when you are teaching in a Juvenile Center.  That number means that even though these students get a grip on their lives while they exist within these walls, they can't maintain that when they go back home.

As a parent of a 5 year old and an 8 year old, I am getting more and more sensitive to the factors that influence their lives, their behaviors, their choices.  When I talk to my “returning” students, they have ALL told me that they go back home fully intending to keep out of trouble, yet they fall back in with the same negative groups or they can't cope with the bad family environment at home.  As a teacher and parent, this makes me immensely sad.


(this is a random picture taken from the WWW; it is not any student that is enrolled in my school or staff that works at my school)

Our youth make their own choices, but many are too unsure of themselves to confidently walk their own path.  They need their friends and family, and if those happen to be a negative influence on them, returning them to that environment is setting them up for failure.  I don't know what the correct answers are for this issue; they go beyond my scope of expertise or experience.  But I am beginning to believe that in order to truly change things for these students, those outside factors also need to go through the “treatment” processes these children face.

We can’t keep placing them in the same situation and expecting different results.

All children deserve to be loved, and have a safe positive environment to learn and grow in.

What exactly do you mean, MY students?!


High-stakes standardized testing: how we all dread those words.  Most of our students aren't overly fond of them either, no matter what incentives or snacks you throw at them.  It is a time when they are evaluated on what they know in conditions so unlike their classroom that it's laughable to think we attempt to measure student knowledge in this manner.

Sitting on committees for these tests and seeing proposed test questions and data from piloted questions is an interesting experience.  If you haven't had the opportunity to do so, please do; your state and testing company need to hear your input on what students are exposed to.  These tests mold students' personalities over the years: they tell students if they are smart or dumb, and they even tell them specifically what areas they fail in.  I realize I am being overly dramatic and negative right now, but I want that to sink in for those of you reading this.  No matter what we tell them about those scores, no matter how positive anyone is about testing, constant reminders of shortcomings and failures build up.

Most state education websites have a place for you to sign up for these advisory committees, and I would strongly suggest participating in at least one during your teaching career.  One reason I suggest this: you get a great idea of the beliefs of teachers, testing officials, and your department of education, and of how students are viewed at these meetings.  Most of that is not pretty at all, as long as you don't have your rose-colored glasses on (like I did the first time I attended one).  I have been on three of these so far, and for some reason they keep calling me back, which is good for the students I work with, because they have no voice at these meetings and are in no way represented or considered if I'm not there.

This is where the problem happens.  For these committees, there are typically a small number of teachers representing different demographics of our state.  I represent a small school (fewer than 350 students) and teach a large number of Native American students.  Typically I find that, other than the small-school connections, the other teachers really have no idea how my students approach or think about tests or test items.


When we look at data, we get the problem, the answers, data about how students performed on that pilot problem, and the rationale for why the answer choices were chosen (on multiple choice questions).  The biggest piece of data they look at is a value that indicates how difficult a problem is. They also have an indicator of how “relevant” that difficulty score is, meaning whether students are just guessing or not.  The problem I have is that many of the other teachers in the group were fine with a lot of these questions, but I was not… here's why (a rough sketch of how statistics like these are commonly computed follows the list below).

  1. Many of the teachers I worked with were from privileged schools: middle to upper class white schools OR charter schools where enrollment is screened.  Coming from a background of teaching students in poverty or within a juvenile justice center, my viewpoints on student norms are vastly different.
  2. Many of the questions they considered “hard” or “cognitively difficult” were word problems.  This becomes a test question on a student’s language skills, not math skills or mathematical thinking.  When I stripped all of the language barriers out of these problems, they were not mathematically challenging to solve and I would estimate 80% of my students could easily find the correct solution.  So my question every time was: is there a better way we could ask this question?
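For context, here is a rough sketch of how those two pilot-item statistics are commonly computed; this is my own assumption about the math, not the committee's actual procedure.  Item difficulty is the proportion of students who answered the item correctly, and a discrimination index (a point-biserial correlation between getting the item right and the total test score) serves as a check on whether correct answers track ability or just guessing.

import statistics

def item_stats(item_correct, total_scores):
    # item_correct: 1 if the student answered the pilot item correctly, else 0
    # total_scores: each student's total score on the whole test
    n = len(item_correct)
    difficulty = sum(item_correct) / n  # share of students answering correctly

    # Point-biserial correlation between the 0/1 item result and total score.
    p, q = difficulty, 1 - difficulty
    sd_total = statistics.pstdev(total_scores)
    if p in (0.0, 1.0) or sd_total == 0:
        return difficulty, 0.0  # no spread, so discrimination is undefined; report 0
    mean_all = statistics.mean(total_scores)
    mean_right = statistics.mean(t for c, t in zip(item_correct, total_scores) if c)
    discrimination = (mean_right - mean_all) / sd_total * (p / q) ** 0.5
    return difficulty, discrimination

# Made-up responses from 8 students, purely for illustration.
correct = [1, 1, 0, 1, 0, 0, 1, 0]
totals = [42, 38, 20, 45, 25, 18, 40, 22]
print(item_stats(correct, totals))

A low difficulty value with a weak discrimination value is the "just guessing" pattern I mean above; a low difficulty value with strong discrimination suggests the item is genuinely hard but still measuring something.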

I questioned many of these pilot questions for our testing.  They were not especially challenging math problems for students to contemplate; they were problems testing a student’s vocabulary skills, what background knowledge they had compared to the writer’s, and their ability to recognize what facts they needed for the problem and what they didn’t. 

That last statement I do feel is important: students need to be able to determine what they need for a problem and what is irrelevant.  That is a very crucial part of mathematical thinking and problem solving.  The problem I have is when it is shrouded in context that is not familiar or attainable for students.  There was never a time when all students lived in Pleasantville, yet those contexts have been and are used for word problems that “relate to all students.”  This thinking needs to stop.  Many of my students don't have the privilege of their own space to call home, a bed, or even a meal to look forward to.  They can't relate to problems based in a context that is a fantasy world for them.


We, as a math community, need to figure out a way to present mathematically challenging problems to students without creating a language or reading barrier, because that is what we ultimately end up measuring, not their mathematical ability.  I kept getting the statement thrown at me, “Well Bryan, how many of your students would actually see this problem?”  MY students?  Are you kidding?  They are OUR students, our schools, our communities, our country, our world.  We need to change our thinking about what experiences our students have, how we present problems, and what language we use.


They are my kids, and I need to stand up for them so they have the same opportunity to demonstrate their mathematical mastery as any other student that is taught in our country.


Summer is closing…

I know my twitter feed and blog have collected some cobwebs and dust this summer, and that's the norm for me.  Summer is a time when I immerse myself in my favorite pastime: my family.  This summer has been extra crammed with the Master's courses I have taken.  Just one more year, just one more year…

Things will be firing up here again soon; I hit the classroom the day after Labor Day.

See you all again soon.

Food for thought…

So I found this blog, and this article got me thinking about the current direction of testing.


Kids Do Worse on Computers When Taking Tests


A growing number of studies conclude that students perform worse on tests when they take them online than when the questions are on paper.

A study published by MIT and conducted at the U.S. Military Academy found that the students who did not use computers scored significantly higher than those who did.

The researchers suggested that removing laptops and iPads from classes was the equivalent of improving the quality of teaching.

The study divided 726 undergraduates randomly into three groups in the 2014-15 and 2015-16 academic years. The control group’s classrooms were “technology-free,” meaning students were not allowed to use laptops or tablets at their desk. Another group was allowed to use computers and other devices, and the third group had restricted access to tablets.

“The results from our randomised experiment suggest that computer devices have a substantial negative effect on academic performance,” the researchers concluded, suggesting that the distraction of an electronic device complete with internet access outweighed their use for note-taking or research during lessons.

The research had an unusual twist: the students involved were studying at the West Point academy in the US, where cadets are ruthlessly ranked by exam results, meaning they were motivated to perform well and may have been more disciplined than typical undergraduates.

But even for the cream of the US army’s future crop, the lure of the digital world appears to have been too much, and exam performance after a full course of studying economics was lower among those in classes allowed to use devices.

“Our results indicate that students perform worse when personal computing technology is available. It is quite possible that these harmful effects could be magnified in settings outside of West Point,” the researchers concluded.

The Hechinger Report reported that writing online essays may contribute to a widening of the achievement gap.

The U.S. Department of Education launched a study of fourth graders using computers for writing compared to fourth graders using paper and pencil.

High-performing students did substantially better on the computer than with pencil and paper. But the opposite was true for average and low-performing students. They crafted better sentences using pencil and paper than they did using the computer. Low-income and black and Hispanic students tended to be in this latter category.

“(T)he use of the computer may have widened the writing achievement gap,” concluded the working paper, “Performance of fourth-grade students in the 2012 NAEP computer-based writing pilot assessment.” If so, that has big implications as test makers, with the support of the Department of Education, move forward with their goal of moving almost all students to computerized assessments, which are more efficient and cheaper to grade.
In the study, high-performing students — the top 20 percent of the test takers — produced an average of 179 words per assignment on the computer, three times the number of words that the bottom 20 percent produced. They also used spellcheck, backspace and other editing tools far more often. The researchers found that these high-performing students were more likely to have access to a computer and the Internet at home.

But these high achievers were in the minority. More than two-thirds of fourth-graders’ responses received scores in the bottom half of a 6-point scoring scale that rated grammar and writing quality. Overall, the average fourth-grader typed a total of 110 words per assignment, far less than the 159-word average on the 2010 paper test.

In looking for explanations for the disparity in performance, it seems likely that the high-performing students are more familiar with computers than low-performing students or even those in the middle.

But it is also likely, at least to me, that it is easier to read and re-read a passage when it is on paper than to read it online. Some young children may have difficulty scrolling up and down the page.

And there may be a difference in recall associated with the medium. That requires further study.

Let me confess that I have tried and failed to read books on a Kindle or similar device. It is easy to lose your place; it is hard to find it again. Maybe the difficulty is age-related; after all, I have only been using a computer for 32 years and began using it as an adult. Children who grow up in the digital age may not have the same visual problem that I have in reading large blocks of text. But it will take more studies to figure out when it is beneficial to use the computer and when it is not. Unfortunately, policymakers have rushed into online instruction and online assessments on the assumption (untested) that there are no downsides. They do this, as the Hechinger Report says, because the computer makes it easier and cheaper to grade tests. Standardization has some benefits. But it also has drawbacks. We should be aware of both.