Articles on this Page
- 10/01/13--08:52: Scarlett Johansson Reveals Her 'Pretty Low' SAT Scores
- 10/11/13--09:52: The SAT Writing Test Is A Disgrace
- 01/07/14--02:20: This Chart Shows The Average SAT Scores In Every State
- 01/21/14--11:00: The 25 US High Schools With The Highest Standardized Test Scores
- 03/05/14--11:36: The Entire Strategy For Scoring High On The SAT Just Changed
- 03/06/14--09:58: Here's What A Question On The New SAT Will Be Like
- 03/06/14--15:02: An IQ Expert Says The SAT Changes Aren't A Big Deal
- 03/10/14--12:29: Kids Who Come From Richer Families Have Higher SAT Scores
- 03/14/14--13:28: Everyone Hated The Old SAT
- 04/15/14--12:30: SATs Are Actually Astonishingly Good At Predicting Success
- 04/16/14--03:47: Here's A Preview Of The Redesigned SAT Exam
- 04/18/14--09:27: Here Are Sample Questions From The Redesigned SAT
- 05/14/14--10:54: The States With The Smartest High School Students
- 06/08/14--17:41: Write Four Good Essays For Bard College And You're In, Period
- 06/26/14--08:52: How To Outsmart Any Multiple-Choice Test
- 08/11/14--13:06: The Colleges With The Highest SAT Scores
"Black Swan" director Darren Aronofsky spoke with Scarlett Johansson in this month's Interview magazine — but the actress quickly turned the tables and asked the Harvard-educated indie film director about his test-taking skills.
Here's how the convo went down:
JOHANSSON: So what was your SAT score?
ARONOFSKY: I really have no idea. You go first.
JOHANSSON: I think the way it worked when I took them was that they were out of 1,600, so maybe you'd get a 1,240 if you were a smarty-pants. I got a 1,080, which was pretty low. But that was probably because I didn't answer half of the math questions.
Johansson doesn't elaborate on why she didn't answer the math questions, but The Daily Mail notes that the 28-year-old actress would have taken the SATs around 2002, at which time the average was 1,020 — making 1,080 a slightly above-average score.
Still, she's no Natalie Portman.
Instead, Johansson says, "I was a big song-and-dance type of kid — you know, one of those kids with jazz hands. I liked to improvise and do weird vocal exercises. I was a major ham — if you can believe!"
This past Saturday, several hundred thousand prospective college students filed into schools across the United States and more than 170 other countries to take the SAT—$51 registration fees paid, No. 2 pencils sharpened, acceptable calculators at the ready.
And as part of the three-hour-and-45-minute ritual, each person taking the 87-year-old test spent 25 minutes drafting a prompt-based essay for the exam's writing section.
This essay, which was added to the SAT in 2005, counts for approximately 30 percent of a test-taker’s score on the writing section, or nearly one-ninth of one’s total score. That may not seem like much, but with competition for spots at top colleges and universities more fierce than ever, performance on a portion of the test worth around 11 percent of the total could be the difference between Stanford and the second tier. So it’s not surprising that students seek strategies and tips that will help them succeed on the writing exercise. Les Perelman, the recently retired former director of MIT’s Writing Across the Curriculum program, has got a doozy.
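The "nearly one-ninth" figure follows from simple arithmetic. A minimal sketch, assuming the roughly 30 percent essay weight within the writing section cited above and the pre-2016 exam's three 800-point sections:

```python
# Back-of-the-envelope weight of the SAT essay in the overall score.
# Assumes the ~30% essay weight within the writing section cited in the
# article; the pre-2016 SAT had three 800-point sections
# (math, reading, writing), for 2,400 possible points.
section_points = 800
total_points = 3 * section_points                  # 2,400 possible points

essay_share_of_writing = 0.30                      # approximate, per the article
essay_share_of_total = essay_share_of_writing * (section_points / total_points)

print(f"{essay_share_of_total:.1%}")               # 10.0% -- in the ballpark of
                                                   # the article's "around 11 percent"
```

The exact share depends on how the College Board scaled the essay into the writing score, but any reasonable weighting lands in the 10-11 percent range the article describes.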
To do well on the essay, he says, the best approach is to just make stuff up.
“It doesn’t matter if [what you write] is true or not,” says Perelman, who helped create MIT’s writing placement test and has consulted at other top universities on the subject of writing assessments. “In fact, trying to be true will hold you back.” So, for instance, in relaying personal experiences, students who take time attempting to recall an appropriately relatable circumstance from their lives are at a disadvantage, he says. “The best advice is, don’t try to spend time remembering an event,” Perelman adds, “Just make one up. And I’ve heard about students making up all sorts of events, including deaths of parents who really didn’t die.”
This approach works, and is advisable, he suggests, because of how the SAT essay is structured and graded. Here’s a typical essay prompt taken from the College Board website. It follows a short, three-sentence passage noting that people hold different views on the subject to be discussed:
Assignment: Do memories hinder or help people in their effort to learn from the past and succeed in the present? Plan and write an essay in which you develop your point of view on this issue. Support your position with reasoning and examples taken from your reading, studies, experience, or observations.
After spending a few moments reading a prompt similar to that one, test takers have 25 minutes in which to draft a submission that will be scored on a 1-to-6 scale. (No scratch paper is provided for outlining or essay planning.) Most students choose to write what is referred to as “the standard five-paragraph essay”: introductory and concluding paragraphs bookending three paragraphs of support in between. Each essay is later independently graded by two readers in a manner that harks back to the famous I Love Lucy scene wherein Lucy and Ethel attempt to wrap chocolate candies traveling on an unrelenting conveyor belt.
Anne Ruggles Gere, a professor at the University of Michigan, serves as director of the Sweetland Center for Writing, which oversees first-year writing at the university. She speaks with SAT essay-graders often. “What they tell me is that they go through a very regimented scoring process, and the goal of that process is to produce so many units of work in a very short period of time,” she says. “So if they take more than about three minutes to read and score these essays, they are eliminated from the job of scoring.” According to Perelman, especially speedy graders are rewarded for their efforts. “They expect readers to read a minimum of 20 essays an hour,” he says. “But readers get a bonus if they read 30 essays an hour, which is two minutes per essay.”
Gere and Perelman aren’t the only ones who know about the demands placed upon SAT essay graders. Many students do, too. Those with a firm grasp of what time-pressured essay-readers care about—and, to be sure, what things they don’t care about—can increase their chances at a high score by resorting to all sorts of approaches that are, shall we say, less than ideal. For starters, facts don’t just take a back seat when it comes to describing personal experiences on the SAT essay; they don’t matter in general.
“There’s really no concern about factual accuracy,” says Gere. “In fact, the makers of the SAT have indicated that in scoring it really doesn’t matter if you say that the War of 1812 occurred in 1817. The complete lack of attention to any kind of accuracy of information conveys a very strange notion of what good writing might be.”
That’s one way of putting it. Perelman, who has trained SAT takers on approaches for achieving the highest possible essay score, has another.
“What they are actually testing,” he says, “is the ability to bullshit on demand. There is no other writing situation in the world where people have to write on a topic that they’ve never thought about, on demand, in 25 minutes. Lots of times we have to write on demand very quickly, but it’s about things we’ve thought about. What they are really measuring is the ability to spew forth as many words as possible in as short a time as possible. It seems like it is training students to become politicians.”
Graders don’t have time to look up facts, or to check if an especially uncommon word actually exists, or perhaps even to do anything more than skim an essay before making a grading determination. Score-savvy essay writers can figure out what might catch the eye of a skimmer.
“I tell students to always use quotations, because the exam readers love quotations,” Perelman says. “One of the other parts of the formula is use big words. Never use many, always use myriad or plethora. Never say bad, always use egregious.”
Of course, according to the College Board website that millions of students have used to prepare for the exam, “there are no shortcuts to success on the SAT essay.” And the country’s largest test prep company, Kaplan, does not teach such approaches. (Disclosure: Kaplan is owned by the soon-to-be-renamed Washington Post Company, which also owns Slate.)
Kaplan’s director of SAT and ACT programs, Colin Gruenwald, tutors students, helps write the company’s curriculum, and trains Kaplan teachers. He says throwing around “big words” in an attempt to influence essay readers is an unnecessarily risky endeavor. He insists that the scoring model is a holistic one that focuses on the overall impression of one’s writing skills. “The point is to demonstrate that you have command of the language, that you are able, in a pressure environment, to sit down and formulate coherent and persuasive thoughts,” he says. Students need to include certain components, he notes. “But that’s not a trick. That’s not a gimmick. That’s just good education.”
Whether verifiably true facts, or an argument that supports a position one actually believes in, are among those necessary components is unclear. What if, for instance, a student comes across an essay prompt that she has a strong opinion about, but can think of better arguments for the opposing position? “The positive side to writing what you believe is that you are more likely to be enthusiastic and passionate,” Gruenwald says. “The ideas may come more smoothly. You may be able to make a very compelling argument. But if you find that there is the side you agree with, but then there is the side that you can come up with a list of really good points for, take the side that you can come up with the list of really good points for. That’s just good demonstration. Because what you are trying to do is demonstrate that you have the writing competency to succeed at the college level. That’s not really dependent upon your opinion of the subject.” And, he admits, “It’s not even related to your grasp of the facts, necessarily.”
For university educators like Perelman and Gere, such realities become part of a trickle-down-type problem. Because of the great importance students, parents, and college admissions officers place on the SAT—as well as the large sums of money that many families spend on outside test prep—high school writing instructors are placed in a bind. “Teachers are under a huge amount of pressure from parents to teach to the test and to get their kids high scores,” Perelman says. They sometimes have to make a choice, he adds, between teaching writing methods that are rewarded by SAT essay-readers—thereby sending worse writers out into the world—or training pupils to write well generally, at the risk of parent complaints about their kids not being sufficiently prepared for the SAT. “And sometimes when they get that pushback, that means they don’t get a promotion, or get a lower raise. So it actually costs them to be principled. You’re putting in negative incentives to be good teachers.”
Gere says the end result of that dynamic shows up when students arrive at college. “I think it’s a very large problem, one that I’m concerned about, and one that we deal with a lot here,” she adds. “What happens is in first-year writing, the typical pattern is that students come in pretty well equipped to write the five-paragraph essay, and much of first-year writing is a process of undoing that.”
College professors, according to Gere, expect their students to be able to demonstrate evidence-based argument in their writing. This involves reading and synthesizing materials that offer multiple perspectives, and writing something that shows students are able to navigate through conflicting positions to come up with a nuanced argument. For those trained in the five-paragraph, non-fact-based writing style that is rewarded on the SAT, shifting gears can be extremely challenging. “The SAT does [students] no favors,” Gere says, “because it gives them a diminished view of what writing is by treating it as something that can be done once, quickly, and that it doesn’t require any basis in fact.”
The result: lots of B.S.
“In our placement tests, you see this all the time, where people continue the B.S., because they just assume that’s what works,” says Perelman. “I think [the SAT essay] creates damage, that it’s harmful.”
College Board President David Coleman just might agree. In September, Coleman seemed to concede that something is amiss with the essay. He raised the possibility of an essay revamp as part of a 2015 SAT overhaul that would focus the writing exercise more on students’ ability to critically analyze a piece of text and craft an essay that draws on the information provided.
That sort of change may seem like a good place to start. (Would it be too much to ask for some scratch paper, too?) But Gere says we should be careful what we wish for with respect to changes to the essay format. She notes that as rushed and crazy-seeming as the SAT essay-scoring process is, the fact that real-live humans are reading and grading the essays is a positive. Computerized scoring is now used to grade writing submitted as part of the GMAT and TOEFL exams, among others. And that method of essay-scoring has come under fire from the National Council of Teachers of English and others for an array of alleged deficiencies—including an overemphasis on word lengths and other measurables, inaccurate error recognition, and a failure to reward creativity.
An SAT essay based on a longer passage with more detail and a constrained set of acceptable response options would likely result in written works that are much more amenable to machine scoring than the current essays. The forthcoming attempt to “fix” the SAT essay may be less about using a model that better lends itself to more valid assessments of students’ writing skills, or turning out better writers, and more about saving money and time by eventually replacing human essay graders with machines.
“It seems to me pretty clear that’s where the SAT is headed,” Gere says. “So it goes from bad to worse, actually.”
And although other standardized tests—such as the LSAT and certain Advanced Placement exams—include essay components that differ from the SAT in terms of what skills are being tested and how writing submissions are scored, those alternative methods are not without their critics. So there would appear to be no standardized-test-essay panacea.
Kaplan’s Gruenwald notes that there have been rumblings about making the SAT essay optional. And some, he says, have suggested doing away with it altogether. Perelman would have no problem with that option. He notes that there’s one thing he tells every student working to achieve a high score on the SAT essay. “Use this [approach] on the exam,” he says, “but never write like this again.”
Here's a fun chart from Seth Kadish at Vizual Statistix.
It shows average SAT scores by state. It also shows which subject the state did best in. An interesting thing that stands out is that the higher a state's SAT participation rate — that is, the percentage of students who took the test with the intent to go to college — the worse students did on average.
Here's an awesome chart that looks at the areas of the country that are killing it in the standardized testing arena.
While some universities and employers, like Google and IBM, are downplaying the value of SAT and ACT scores, other research holds that this data is highly reliable and a valid measure of brainpower.
A study from Case Western Reserve University last year showed that standardized testing scores are excellent measures of general cognitive ability. And they continue to play a role in the competitive college admissions process.
With that in mind, school analysis site Niche surveyed 909 public and private high schools between 2012 and 2014, and ranked schools by students' average SAT/ACT scores. The ranking is based on self-reported scores from 75,834 users over the two-year period. In order to qualify, a school had to have responses from at least 100 students.
Without further ado, here's the complete list of the high schools with the highest standardized test scores, in order:
1. Thomas Jefferson High - Alexandria, VA
2. The Harker School - San Jose, CA
3. Dalton School - New York City, NY
4. Stuyvesant High School - New York City, NY
5. Regis High School - New York City, NY
6. Marlborough School - Los Angeles, CA
7. Lynbrook High - San Jose, CA
8. Lick-Wilmerding High School - San Francisco, CA
9. The Hotchkiss School - Lakeville, CT
10. Packer Collegiate Institute - Brooklyn, NY
11. IL Mathematics & Science Academy - Aurora, IL
12. Henry M. Gunn High - Palo Alto, CA
13. Phillips Exeter Academy - Exeter, NH
14. Middlesex School - Concord, MA
15. Monta Vista High - Cupertino, CA
16. Sidwell Friends School - Washington, D.C.
17. Choate Rosemary Hall - Wallingford, CT
18. Mission San Jose High - Fremont, CA
19. Noble & Greenough School - Dedham, MA
20. Leland High - San Jose, CA
21. Ransom Everglades School - Coconut Grove, FL
22. Leland Public School - Leland, MI
23. Ethical Culture Fieldston Middle & Upper - Bronx, NY
24. Brentwood School - Merrimack, NH
25. The Shipley School - Bryn Mawr, PA
It's the No. 1 story on NYTimes.com today.
But in case you missed it, Yale Law professor Amy Chua, better known as "The Tiger Mom," and her husband are out with an essay summarizing the views of their latest work, "The Triple Package: How Three Unlikely Traits Explain the Rise and Fall of Cultural Groups in America."
The main thesis of their book is likely to elicit some groans: The most successful cultural groups in America, they write, tend to possess three qualities: an attitude of superiority, a concurrent sense of insecurity, and impulse control. Some groups got 'em, others don't, they argue.
But there was another, more plausible assertion that jumped out to us: Being in America a long time seems to correlate with declining performance.
"Most fundamentally, groups rise and fall over time. The fortunes of WASP elites have been declining for decades. In 1960, second-generation Greek-Americans reportedly had the second-highest income of any census-tracked group.
"Group success in America often tends to dissipate after two generations. Thus while Asian-American kids overall had SAT scores 143 points above average in 2012 — including a 63-point edge over whites — a 2005 study of over 20,000 adolescents found that third-generation Asian-American students performed no better academically than white students."
This stat shows success in America is not about blood:
"The fact that groups rise and fall this way punctures the whole idea of “model minorities” or that groups succeed because of innate, biological differences. Rather, there are cultural forces at work."
This sort of helps clarify the view we got from the book via The New York Post's recent preview: that Chua and her husband believe specific cultural groups are inherently better than others.
But still, Chua and her husband hedge. Any individual, they say, is capable of defying their group's broader performance and developing the "triple package" of successful qualities:
The way to develop this package of qualities — not that it’s easy, or that everyone would want to — is through grit. It requires turning the ability to work hard, to persevere and to overcome adversity into a source of personal superiority. This kind of superiority complex isn’t ethnically or religiously exclusive. It’s the pride a person takes in his own strength of will.
Indeed, New York Magazine's Lisa Miller calls the work a "triumph of caution." "One can almost hear the arguments in the Chua-Rubenfeld household after the publication of Tiger Mother," she writes.
We're not sure if that will make for compelling reading. But these kinds of "secret recipe for success" books always blow out best-seller lists. "The Battle Hymn Of The Tiger Mother," Chua's first salvo on parenting, remains among the top 5,000 best-selling books on Amazon. So this new work will probably get gobbled up anyway.
It doesn't matter how old you are: Next time you interview for a job, be prepared to share your SAT scores.
In the timeless quest to predict future success in employees, a number of employers are turning to candidates' SAT results. Big-name consulting firms such as McKinsey and Bain, as well as banks like Goldman Sachs, are among the companies that ask newly minted college grads for their scores in job applications, according to an article in the Wall Street Journal. Some other companies request scores even from candidates in their 40s and 50s.
For all these job seekers, the SATs are a distant memory. Even the newest bachelor's degree recipients tend to be at least four years removed from the test. So why is it that even after people go to college, mature, and gain work experience, employers still care about a standardized test taken in high school?
Jonathan Wai, an intelligence expert and researcher in Duke University's Talent Identification Program (TIP), says the SATs are considered to be a measure of "general intelligence and general ability." That's important because research has shown that general ability "actually predicts occupational success across a range of occupations," he explains.
The SATs also appeal to many hiring managers because they're standardized. In theory, these test scores serve as an equalizer and mediate some of the well-documented biases that normally influence the hiring process. For example, it's been found that managers generally prefer hiring people who are similar to them — be it in education, background, interests, or personality. This is called a "similarity bias."
The beauty of the SATs is that everyone takes them at the same time, with roughly the same level (if not quality) of education. In the broadest sense, someone's score on the SAT offers a glimpse of how they compare to other candidates in terms of general knowledge and ability. "It's standardized, and it's objective in that sense," Wai says.
Jeff Bezos, the billionaire founder and CEO of Amazon, is one of the most famous proponents of using SAT scores in hiring decisions. Bezos scored highly on a standardized IQ test when he was only 8 years old, and in his early days as a manager, he liked to ask candidates for their SAT results in interviews he conducted. He has said that "hiring only the best and brightest was key to Amazon's success."
Despite the benefits of using SATs in hiring, Wai admits the test is "not a perfect measure by any means" and questions how widespread its use as a hiring tool really is. He's also not sure for how long the scores should be considered valid. After all, he points out, scores on the GRE — the graduate school entrance exam — are good for only five years. That raises an obvious question for companies like Cvent Inc., which asks for SAT or equivalent ACT scores from job applicants of all ages, the Wall Street Journal reports.
Wai hopes the emphasis employers like Cvent Inc. put on standardized tests won't encourage recent college graduates and experienced workers to start retaking the SATs. For one, people's scores probably wouldn't change much, he says. Additionally, the SAT would no longer offer the benefit of standardization if people were reporting scores from all different points in their lives instead of the score they got in high school.
College Board officials announced major changes to the SATs Wednesday, the first significant revisions since 2005.
Among the changes, according to the Associated Press: the SAT essay will now be optional for test-takers, there will no longer be a penalty for incorrect answers, and the questions will put more emphasis on practical material.
The math and reading sections will remain, but the SAT will again be graded out of 1,600 possible points, with the option of adding an essay score.
According to the AP, the grading of the now-optional essay will also change. "It will measure students' ability to analyze and explain how an author builds an argument, instead of measuring the coherence of the writing but not the quality or accuracy of the reasoning," they report.
"It will be up to school districts and colleges the students apply to as to whether the essay will be required," the AP reports.
By doing away with the wrong-answer penalty, students will no longer be discouraged from guessing. Previously, test-takers would lose 0.25 points for each wrong answer.
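The logic behind the old penalty is a quick expected-value calculation. A sketch, assuming five answer choices per question (as on the pre-2016 multiple-choice sections) and the quarter-point deduction described above:

```python
# Expected raw-score change from blind guessing under the old SAT rule:
# +1 point for a correct answer, -0.25 points for each incorrect one.
# Assumes five equally likely answer choices, as on the pre-2016 sections.
PENALTY = 0.25

def expected_guess_value(choices: int) -> float:
    """Average points gained per blind guess among `choices` equally likely options."""
    return (1.0 - PENALTY * (choices - 1)) / choices

print(expected_guess_value(5))  # 0.0    -> blind guessing was, on average, a wash
print(expected_guess_value(4))  # 0.0625 -> eliminating one choice made guessing pay
```

With five choices, the quarter-point deduction exactly cancels the expected gain from a lucky guess, which is why the old rule discouraged blind guessing; once a student could eliminate even one option, guessing had positive expected value.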
Additionally, both the math and reading sections — which will still be required for test-takers — are shifting away from more obscure questioning.
So-called "SAT words" will be phased out and replaced with "vocabulary words that are widely used in college and career," according to The Washington Post.
The math section will also focus its questions, the AP reports. "Instead of testing a wide range of math concepts, the new exam will focus on a few areas, like algebra, deemed most needed for college and life afterward," they note. Calculators will also only be allowed on certain sections, instead of throughout the entire test.
These changes will go into effect in 2016.
The biggest shift in the new SAT is not that the test will no longer require an essay or penalize students for wrong answers — it's that the entire focus of the exam will change, to the benefit of student test-takers.
While the first two changes are definitely worth any accompanying excitement, the core of the test is shifting toward more practical questions and analysis. Put more simply, the SAT is no longer trying to find out how well you can memorize a set of flashcards or use a calculator.
Some of these changes may be viewed as a response to the increasingly popular ACT test, which is considered more "practical" and recently overtook the SAT as the most popular college entrance exam. College Board, which runs the SAT, said that the changes are in part meant to emphasize what students are already learning in school.
"It is time for an admissions assessment that makes it clear that the road to success is not last-minute tricks or cramming, but the challenging learning students do each day," College Board president David Coleman said in his announcement Wednesday.
While no questions from the new test have been released, these are major changes that should make the SAT a more practical exam, and likely an easier one to prepare for. Using this helpful breakdown of the new SAT from The Washington Post as a reference, here's why these changes are more impactful than they may seem:
The SAT math section will narrow its focus on a few core topics, rather than question students from a broad range of high-school level math fields. According to the Associated Press, "the new exam will focus on a few areas, like algebra, deemed most needed for college and life afterward."
Additionally, while calculators were previously allowed throughout the entire math portion of the SAT, students can now only use them in certain sections. This will likely force the format of these questions — as well as the numbers used — to be simpler and more manageable, as test-takers will no longer have the help of an automated number cruncher.
The reading portion — currently known as "critical reading" — will combine with multiple-choice writing questions to form a new "evidence-based reading and writing" section. There will no longer be "sentence completion" questions, among other changes.
There will also be a significant shift in the types of words used for vocabulary questions. The test plans to move away from so-called "SAT words" — notorious for popping up in questions even though they're very rarely used outside of the test — and will now favor "words that are widely used in college and career," The Washington Post reports.
One example of this could be "synthesis," which while potentially tricky to define, is not as obscure as a word such as "phlegmatic."
Another way the new SAT will employ "real world" applicability will be in its choice of reading analysis passages, which will now be opened up to include potential examples from science, history, and social studies, according to The Washington Post. Questions will also ask test-takers to identify the specific part of a passage that supports their answer.
This practicality will also be demonstrated in a shift away from obscure reading analysis passages and toward documents more students would be familiar with, such as the Declaration of Independence.
While the biggest change in the SAT's newest section — the essay — is that it is now optional, students who choose to complete it will also be graded differently than in years past.
Since the essay was introduced in 2005, SAT graders have focused more on the structure of the essay and its argument than on what evidence was actually being cited. Theoretically, a student could fabricate all of their supporting examples and not be penalized.
That is no longer the case. Now, students who choose to submit an SAT essay will be explicitly evaluated on the concrete examples they use and how they're employed and analyzed as evidence.
The New York Times Magazine has published a great behind-the-scenes feature about the new SAT changes, which includes the clearest look yet at what a question may look like on the test.
The reformatted college admissions exam was announced Wednesday by College Board President David Coleman. Among other major changes, the new SAT will shift towards asking test-takers more practical questions, rather than quizzing them on obscure vocabulary words.
While there have not been any official new questions released by College Board, Coleman walked The Times through what he described as a "simplistic example of the kind of question that might be on this part of the exam." Here's how he described a potential question in the reading section of the new SAT:
Students would read an excerpt from a 1974 speech by Representative Barbara Jordan of Texas, in which she said the impeachment of Nixon would divide people into two parties. Students would then answer a question like: "What does Jordan mean by the word 'party'?" and would select from several possible choices. This sort of vocabulary question would replace the more esoteric version on the current SAT...
The Barbara Jordan vocabulary question would have a follow-up — "How do you know your answer is correct?" — to which students would respond by identifying lines in the passage that supported their answer. (By 2016, there will be a computerized version of the SAT, and students may someday search the text and highlight the lines on the screen.) Students will also be asked to examine both text and data, including identifying and correcting inconsistencies between the two.
This line of questioning seems to be in keeping with the new SAT's goal of assessing students on subjects they encounter in the classroom. Coleman also expanded on how the exam is changing the focus of its vocabulary testing, according to The Times:
The idea is that the test will emphasize words students should be encountering, like "synthesis," which can have several meanings depending on their context. Instead of encouraging students to memorize flashcards, the test should promote the idea that they must read widely throughout their high-school years.
READ MORE: The Story Behind The SAT Overhaul
College Board officials made waves yesterday when they announced the first major changes to the SAT since 2005.
The overhaul will bring scoring back to a 1,600-point scale, eliminate penalties for wrong answers, and make the essay section optional. Arcane vocabulary words will be replaced with ones people might actually use. The modifications have been described as "sweeping revisions" and a "fundamental rethinking."
But intelligence expert Douglas Detterman thinks they aren't a big deal.
"The changes are relatively minor," said Detterman, a professor emeritus at Case Western Reserve University and founder of the scientific journal Intelligence. "I don't think it will change much."
Detterman is concerned with the "psychometrics" of the SAT — the academic term for what the test measures and how it does that. In the case of the SAT, which is designed to measure general intelligence in the core areas of math and reading/writing, he says the newly announced changes should have little effect on what the test assesses.
So why do it? In recent years, the SAT has been criticized for being disconnected from the academic curriculum at most schools, and its effectiveness questioned. Many worry that the standardized assessment — once introduced as a great equalizer — has simply turned into a massive business that privileges the rich and disadvantages the poor. Detterman thinks the new changes have mostly been designed to bolster the SAT's public image.
For example, take the planned changes to vocabulary questions. The AP reported that the College Board plans to throw out words like "prevaricator" and "sagacious" in favor of words that are more likely to be used in school or the workplace, such as "synthesis" and "empirical." Detterman suspects this is being done mostly for "public relations."
"On vocabulary items I don't think it matters what kind of words are used," he said. "It's the difficulty of the words, and if you want to use words that are more related to business that's fine."
Then there's the decision to drop the essay. Detterman says this is not surprising, as the section is likely expensive and time consuming to grade. On top of that, a common critique of the SAT essay is that scoring standards are inconsistent and unreliable. According to an article in the New York Times Magazine, SAT coaches believe earning high marks can be as simple as using plenty of details (regardless of whether those details are accurate), writing long, and periodically inserting fancy-sounding words like "plethora."
The one change that Detterman does support from a psychometrics standpoint is eliminating the 0.25-point penalty for wrong guesses. The SAT is what's known as a "power test" — an assessment designed to get participants to answer as many questions correctly as possible. For that particular kind of test, Detterman says it doesn't make sense to penalize people for wrong guesses, thus deterring them from answering questions.
Finally, in response to the criticism that the SAT privileges the rich and hurts the poor and lower middle class, the College Board on Wednesday announced a new partnership with nonprofit learning service Khan Academy that will provide free SAT test prep materials. Will it matter? Again, Detterman says it may come down to image more than anything else. Data show that most prep courses are fairly ineffective at raising students' test scores, though some benefit can come from familiarity with the style of the test.
"The SAT is nothing more than an intelligence test. It's an intelligence test, and you can't really prepare for it," he says. "So I don't think that makes that much difference."
Wednesday, the College Board, the group responsible for the SAT, announced changes that included removing difficult vocabulary and making the essay portion of the exam optional. Most news reports accepted the College Board's purported reason for changing the SAT: The non-profit wanted to more accurately reflect the schoolwork completed in high school and needed in college.
The New York Times's headline reported the College Board's goal was for the SAT to "realign with schoolwork." CNN also reported, with little skepticism, that the purpose was to connect the test to high schools and create, in the words of College Board President and CEO David Coleman, "more college-ready students."
But there's another reason to make the test more appealing to students: improving the College Board's financial outlook.
The SAT faces two challenges. First, the ACT, a competing test, has slowly gained market share, even passing the SAT in total number of test takers in 2012. Second, the trend of "test flexible" universities is spreading, with top-100 schools like the University of Rochester, Brandeis, and Wake Forest accepting alternatives like graded exams, extracurricular activities, or simply high-school GPA.
Why might these trends be a problem? The College Board is a non-profit, but one with a yearly revenue of more than $750 million, according to the group's most recent publicly available 990 form. The president at the time of the 990, Gaston Caperton, made more than $1.5 million, including incentive and deferred compensation; 22 other employees earned at least $200,000. This is a sprawling non-profit deeply connected to higher education in the United States, with a budget that reflects that omnipresence.
The SAT clearly plays a role in this budget. In 2012, 1.6 million students took the SAT, and as the New York Times reported, today the number is likely higher. The 2014 test will cost students (or their parents, often) $51. Waitlist registration, available to those who miss registration deadlines, costs an additional $45. Changing the date of your test runs you $27.50. The details of the College Board's revenue stream aren't publicly available, but SAT administration and all of its related paraphernalia — The Official SAT Study Guide with DVD ($31.99), the SAT Score Verification Services ($18), the SAT Online Course (just $69.95 a year!) — surely make up a sizable portion. Chadwick Matlin of Slate.com conservatively estimated this combined revenue at around $115 million back in 2006.
It's worth noting here that the twin goals of accurate student measurement and revenue maximization aren't mutually exclusive. For instance, one common complaint about the SAT is that its scores are a worse indicator of college success than high school grades. In that case, a more accurate test would probably be good for students and universities, but also better for the College Board's bottom line. And the College Board is partnering with the Khan Academy on a free test-prep course, a move clearly not taken to generate more revenue.
Regardless, journalists should stop reporting the SAT's reforms as if they were the result of a few good-hearted education advocates at an NGO, rather than a business desperately trying to keep a core revenue stream intact.
"All In With Chris Hayes" on MSNBC ran an interesting segment last Friday covering how SAT scores trend with family income.
This segment followed a series of announced changes to the college entrance exam, which College Board officials hope will equalize what has become an increasingly divisive test. Critics say many factors — such as SAT prep programs and private school educations — contribute to students from wealthier families scoring higher.
However, according to the College Board, it's not entirely true to say that a factor such as income has a direct effect on test scores. As the group notes in its 2013 annual report, these factors "are associated with educational experiences both on tests such as the SAT and in schoolwork." More specifically, something like income or parents' educational background contributes to an environment that may allow a student to perform better, but more wealth will not automatically give a student a better score.
Regardless, the relationship between exam scores and income is undeniable. From the show's Twitter, here's a great chart clearly showing the rise of SAT scores as income increases:
Watch the full segment below:
There is a recurring theme to The New York Times' excellent feature behind the scenes of the new SAT changes: everyone hated the old test.
The recent changes to the SAT will likely change how students prepare for and score highly on the college entrance exam. The new test — masterminded by College Board President David Coleman — shifts toward a more practical questioning model than its much-criticized previous incarnation.
According to The Times, when Coleman took over he faced an "array of complaints coming from all of the College Board’s constituencies: teachers, students, parents, university presidents, college-admissions officers, high-school counselors. They were all unhappy with the test, and they all had valid reasons."
One of the loudest SAT opponents was Les Perelman, a writing director at MIT. According to The Times, Perelman tried to expose what he saw as major flaws with the SAT essay by creatively coaching students on how to beat the system:
His earliest findings showed that length, more than any other factor, correlated with a high score on the essay. More recently, Perelman coached 16 students who were retaking the test after having received mediocre scores on the essay section. He told them that details mattered but factual accuracy didn't. "You can tell them the War of 1812 began in 1945," he said. He encouraged them to sprinkle in little-used but fancy words like "plethora" or "myriad" and to use two or three preselected quotes from prominent figures like Franklin Delano Roosevelt, regardless of whether they were relevant to the question asked.
The explicit lack of fact-checking on the SAT essays allowed Perelman's students to post much higher scores when they retook the test, The Times reports.
Additionally, the people whom the SAT may impact the most — students and teachers — found that there were major problems with the test. As The Times reports:
Students despised the SAT not just because of the intense anxiety it caused — it was one of the biggest barriers to entry to the colleges they dreamed of attending — but also because they didn't know what to expect from the exam and felt that it played clever tricks, asking the kinds of questions they rarely encountered in their high-school courses. Students were docked one-quarter point for every multiple-choice question they got wrong, requiring a time-consuming risk analysis to determine which questions to answer and which to leave blank. Teachers, too, felt the test wasn't based on what they were doing in class, and yet the mean SAT scores of many high schools were published by state education departments, which meant that blame for poor performances was often directed at them.
Another major perceived fault of the old SAT was its seemingly intertwined relationship with a family's income. Students who came from wealthier families tended to do better than students whose families had less money and, as Coleman told The Times, it was clear that "no parents, whatever their socioeconomic status, were satisfied" with the test.
As The Times reports:
The achievements of children from affluent families were tainted because they "bought" a score; those in the middle class cried foul because they couldn't get the "good stuff" or were overextended trying to; and the poor, often minority students, were shut out completely.
The Times also describes a meeting Coleman had with Wade Henderson, the president and C.E.O. of the Leadership Conference on Civil and Human Rights. Henderson spoke with the College Board head about "the ill will that had been built up in the minority community over the SAT, how the test has long been viewed not as a launching pad to something better but as an obstacle to hard-working, conscientious students who couldn't prepare for it in the way more affluent students could."
More recently, the SAT has also begun to receive criticism from inside the ivory tower of academia. Since 2008, many colleges have dropped the SAT as a requirement, making it and its competitor, the ACT, optional for student applicants.
According to The Times, "many of the admissions officers [Coleman] spoke with made it clear that they were uncomfortable being beholden to the test, at least to this test."
While it is still unclear how much of an impact these changes will have on the SAT, it seems like anything would be better than the old test. Whatever the merits are of the college entrance exam, it does appear that the people behind the SAT are listening to the numerous complaints.
The College Board—the standardized testing behemoth that develops and administers the SAT and other tests—has redesigned its flagship product again.
Beginning in spring 2016, the writing section will be optional, the reading section will no longer test “obscure” vocabulary words, and the math section will put more emphasis on solving problems with real-world relevance.
Overall, as the College Board explains on its website, “The redesigned SAT will more closely reflect the real work of college and career, where a flexible command of evidence—whether found in text or graphic [sic]—is more important than ever.”
A number of pressures may be behind this redesign.
Perhaps it’s competition from the ACT, or fear that unless the SAT is made to seem more relevant, more colleges will go the way of Wake Forest, Brandeis, and Sarah Lawrence and join the “test optional admissions movement,” which already boasts several hundred members.
Or maybe it’s the wave of bad press that standardized testing, in general, has received over the past few years.
Critics of standardized testing are grabbing this opportunity to take their best shot at the SAT. They make two main arguments. The first is simply that a person’s SAT score is essentially meaningless—that it says nothing about whether that person will go on to succeed in college. Leon Botstein, president of Bard College and longtime standardized testing critic, wrote in Time that the SAT “needs to be abandoned and replaced,” and added:
The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated 20th century social scientific assumptions and strategies.
Calling use of SAT scores for college admissions a “national scandal,” Jennifer Finney Boylan, an English professor at Colby College, argued in the New York Times that:
The only way to measure students’ potential is to look at the complex portrait of their lives: what their schools are like; how they’ve done in their courses; what they’ve chosen to study; what progress they’ve made over time; how they’ve reacted to adversity.
Along the same lines, Elizabeth Kolbert wrote in The New Yorker that “the SAT measures those skills—and really only those skills—necessary for the SATs.”
But this argument is wrong. The SAT does predict success in college—not perfectly, but relatively well, especially given that it takes just a few hours to administer. And, unlike a “complex portrait” of a student’s life, it can be scored in an objective way. (In a recent New York Times op-ed, the University of New Hampshire psychologist John D. Mayer aptly described the SAT’s validity as an “astonishing achievement.”)
In a study published in Psychological Science, University of Minnesota researchers Paul Sackett, Nathan Kuncel, and their colleagues investigated the relationship between SAT scores and college grades in a very large sample: nearly 150,000 students from 110 colleges and universities.
SAT scores predicted first-year college GPA about as well as high school grades did, and the best prediction was achieved by considering both factors. Botstein, Boylan, and Kolbert are either unaware of this directly relevant, easily accessible, and widely disseminated empirical evidence, or they have decided to ignore it and base their claims on intuition and anecdote—or perhaps on their beliefs about the way the world should be rather than the way it is.
Furthermore, contrary to popular belief, it’s not just first-year college GPA that SAT scores predict. In a four-year study that started with nearly 3,000 college students, a team of Michigan State University researchers led by Neal Schmitt found that test score (SAT or ACT—whichever the student took) correlated strongly with cumulative GPA at the end of the fourth year.
If the students were ranked on both their test scores and cumulative GPAs, those who had test scores in the top half (above the 50th percentile, or median) would have had a roughly two-thirds chance of having a cumulative GPA in the top half. By contrast, students with bottom-half SAT scores would have had only a one-third chance of making it to the top half in GPA.
Test scores also predicted whether the students graduated: A student who scored in the 95th percentile on the SAT or ACT was about 60 percent more likely to graduate than a student who scored in the 50th percentile.
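The top-half odds in the Michigan State study are roughly what a simple statistical model predicts from the test–GPA correlation alone. Here's a minimal Monte Carlo sketch; the correlation value of 0.5 is an assumption for illustration, since the study reports only that scores "correlated strongly" with GPA:

```python
import math
import random

random.seed(0)
r = 0.5        # assumed test-GPA correlation (illustrative, not from the study)
n = 100_000

top_half_test = 0
top_half_both = 0
for _ in range(n):
    test = random.gauss(0, 1)                                   # standardized test score
    gpa = r * test + math.sqrt(1 - r * r) * random.gauss(0, 1)  # standardized GPA
    if test > 0:                       # top half on the test
        top_half_test += 1
        if gpa > 0:                    # also top half on GPA
            top_half_both += 1

print(round(top_half_both / top_half_test, 2))  # close to 2/3 for r = 0.5
```

For bivariate normal scores, the chance of a top-half GPA given a top-half test score works out to 1/2 + arcsin(r)/π, which is exactly two-thirds when r = 0.5 — and, by symmetry, one-third for bottom-half scorers, matching the figures quoted above.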
Similarly impressive evidence supports the validity of the SAT’s graduate school counterparts: the Graduate Record Examinations, the Law School Admissions Test, and the Graduate Management Admission Test.
A 2007 Science article summed up the evidence succinctly: “Standardized admissions tests have positive and useful relationships with subsequent student accomplishments.”
SAT scores even predict success beyond the college years. For more than two decades, Vanderbilt University researchers David Lubinski, Camilla Benbow, and their colleagues have tracked the accomplishments of people who, as part of a youth talent search, scored in the top 1 percent on the SAT by age 13.
Remarkably, even within this group of gifted students, higher scorers were not only more likely to earn advanced degrees but also more likely to succeed outside of academia.
For example, compared with people who “only” scored in the top 1 percent, those who scored in the top one-tenth of 1 percent—the extremely gifted—were, as adults, more than twice as likely to have an annual income in the top 5 percent of Americans.
The second popular anti-SAT argument is that, if the test measures anything at all, it’s not cognitive skill but socioeconomic status.
In other words, some kids do better than others on the SAT not because they’re smarter, but because their parents are rich. Boylan argued in her Times article that the SAT “favors the rich, who can afford preparatory crash courses” like those offered by Kaplan and the Princeton Review. Leon Botstein claimed in his Time article that “the only persistent statistical result from the SAT is the correlation between high income and high test scores.”
And according to a Washington Post Wonkblog infographic (which is really more of a disinfographic) “your SAT score says more about your parents than about you.”
It’s true that economic background correlates with SAT scores. Kids from well-off families tend to do better on the SAT. However, the correlation is far from perfect. In the University of Minnesota study of nearly 150,000 students, the correlation between socioeconomic status, or SES, and SAT was not trivial but not huge. (A perfect correlation has a value of 1; this one was .25.) What this means is that there are plenty of low-income students who get good scores on the SAT; there are even likely to be low-income students among those who achieve a perfect score on the SAT.
Thus, just as it was originally designed to do, the SAT in fact goes a long way toward leveling the playing field, giving students an opportunity to distinguish themselves regardless of their background. Scoring well on the SAT may in fact be the only such opportunity for students who graduate from public high schools that are regarded by college admissions offices as academically weak.
In a letter to the editor, a reader of Elizabeth Kolbert’s New Yorker article on the SAT made this point well:
The SAT may be the bane of upper-middle-class parents trying to launch their children on a path to success. But sometimes one person’s obstacle is another person’s springboard. I am the daughter of a single, immigrant father who never attended college, and a good SAT score was one of the achievements that catapulted me into my state’s flagship university and, from there, on to medical school. Flawed though it is, the SAT afforded me, as it has thousands of others, a way to prove that a poor, public-school kid who never had any test prep can do just as well as, if not better than, her better-off peers.
The sort of admissions approach that Botstein advocates—adjusting high school GPA “to account for the curriculum and academic programs in the high school from which a student graduates” and abandoning the SAT—would do the opposite of leveling the playing field. A given high school GPA would be adjusted down for a poor, public-school kid, and adjusted up for a rich, private-school kid.
Furthermore, contrary to what Boylan implies in her Times piece, “preparatory crash courses” don’t change SAT scores much. Research has consistently shown that prep courses have only a small effect on SAT scores—and a much smaller effect than test prep companies claim they do.
For example, in one study of a random sample of more than 4,000 students, average improvement in overall score on the “old” SAT, which had a range from 400 to 1600, was no more than about 30 points.
Finally, it is clear that SES is not what accounts for the fact that SAT scores predict success in college. In the University of Minnesota study, the correlation between high school SAT and college GPA was virtually unchanged after the researchers statistically controlled for the influence of SES.
If SAT scores were just a proxy for privilege, then putting SES into the mix should have removed, or at least dramatically decreased, the association between the SAT and college performance.
But it didn’t.
This is more evidence that Boylan overlooks or chooses to ignore.
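The "statistically controlled for SES" step has a concrete form: the partial correlation. A minimal sketch, using the study's reported SES–SAT correlation of .25 and hypothetical values for the other two correlations:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y after removing the linear influence of z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_sat_gpa = 0.50  # hypothetical raw SAT-GPA correlation
r_ses_sat = 0.25  # SES-SAT correlation reported in the Minnesota study
r_ses_gpa = 0.15  # hypothetical SES-GPA correlation

print(round(partial_corr(r_sat_gpa, r_ses_sat, r_ses_gpa), 3))  # 0.483
```

With a modest SES link like this, the SAT–GPA correlation barely moves after controlling for SES (0.50 to 0.483) — the pattern the Minnesota researchers report. If the SAT were just a proxy for privilege, the drop would be dramatic.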
What this all means is that the SAT measures something—some stable characteristic of high school students other than their parents’ income—that translates into success in college. And what could that characteristic be? General intelligence.
The content of the SAT is practically indistinguishable from that of standardized intelligence tests that social scientists use to study individual differences, and that psychologists and psychiatrists use to determine whether a person is intellectually disabled—and even whether a person should be spared execution in states that have the death penalty.
Scores on the SAT correlate very highly with scores on IQ tests—so highly that the Harvard education scholar Howard Gardner, known for his theory of multiple intelligences, once called the SAT and other scholastic measures “thinly disguised” intelligence tests.
One could of course argue that IQ is also meaningless—and many have. For example, in his bestseller The Social Animal, David Brooks claimed that “once you get past some pretty obvious correlations (smart people make better mathematicians), there is a very loose relationship between IQ and life outcomes.” And in a recent Huffington Post article, psychologists Tracy Alloway and Ross Alloway wrote that
IQ won’t help you in the things that really matter: It won’t help you find happiness, it won’t help you make better decisions, and it won’t help you manage your kids’ homework and the accounts at the same time. It isn’t even that useful at its raison d'être: predicting success.
But this argument is wrong, too. Indeed, we know as well as anything we know in psychology that IQ predicts many different measures of success.
Exhibit A is evidence from research on job performance by the University of Iowa industrial psychologist Frank Schmidt and his late colleague John Hunter. Synthesizing evidence from nearly a century of empirical studies, Schmidt and Hunter established that general mental ability—the psychological trait that IQ scores reflect—is the single best predictor of job training success, and that it accounts for differences in job performance even in workers with more than a decade of experience.
It’s more predictive than interests, personality, reference checks, and interview performance. Smart people don’t just make better mathematicians, as Brooks observed—they make better managers, clerks, salespeople, service workers, vehicle operators, and soldiers.
IQ predicts other things that matter, too, like income, employment, health, and even longevity. In a 2001 study published in the British Medical Journal, Scottish researchers Lawrence Whalley and Ian Deary identified more than 2,000 people who had taken part in the Scottish Mental Survey of 1932, a nationwide assessment of IQ.
Remarkably, people with high IQs at age 11 were considerably more likely to survive to old age than were people with lower IQs. For example, a person with an IQ of 100 (the average for the general population) was 21 percent more likely to live to age 76 than a person with an IQ of 85. And the relationship between IQ and longevity remains statistically significant even after taking SES into account.
Perhaps IQ reflects the mental resources—the reasoning and problem-solving skills—that people can bring to bear on maintaining their health and making wise decisions throughout life. This explanation is supported by evidence that higher-IQ individuals engage in more positive health behaviors, such as deciding to quit smoking.
IQ is of course not the only factor that contributes to differences in outcomes like academic achievement and job performance (and longevity).
Psychologists have known for many decades that certain personality traits also have an impact. One is conscientiousness, which reflects a person’s self-control, discipline, and thoroughness. People who are high in conscientiousness delay gratification to get their work done, finish tasks that they start, and are careful in their work, whereas people who are low in conscientiousness are impulsive, undependable, and careless (compare Lisa and Bart Simpson).
The University of Pennsylvania psychologist Angela Duckworth has proposed a closely related characteristic that she calls “grit,” which she defines as a person’s “tendency to sustain interest in and effort toward very long-term goals,” like building a career or family.
Duckworth has argued that such factors may be even more important as predictors of success than IQ. In one study, she and UPenn colleague Martin Seligman found that a measure of self-control collected at the start of eighth grade correlated more than twice as strongly with year-end grades as IQ did.
However, the results of meta-analyses, which are more telling than the results of any individual study, indicate that these factors do not have a larger effect than IQ does on measures of academic achievement and job performance. So, while it seems clear that factors like conscientiousness—not to mention social skill, creativity, interest, and motivation—do influence success, they cannot take the place of IQ.
None of this is to say that IQ, whether measured with the SAT or a traditional intelligence test, is an indicator of value or worth. Nobody should be judged, negatively or positively, on the basis of a test score. A test score is a prediction, not a prophecy, and doesn’t say anything specific about what a person will or will not achieve in life. A high IQ doesn’t guarantee success, and a low IQ doesn’t guarantee failure.
Furthermore, the fact that IQ is at present a powerful predictor of certain socially relevant outcomes doesn’t mean it always will be. If there were less variability in income—a smaller gap between the rich and the poor—then IQ would have a weaker correlation with income. For the same reason, if everyone received the same quality of health care, there would be a weaker correlation between IQ and health.
But the bottom line is that there are large, measurable differences among people in intellectual ability, and these differences have consequences for people’s lives. Ignoring these facts will only distract us from discovering and implementing wise policies.
Given everything that social scientists have learned about IQ and its broad predictive validity, it is reasonable to make it a factor in decisions such as whom to hire for a particular job or admit to a particular college or university. In fact, disregarding IQ—by admitting students to colleges or hiring people for jobs in which they are very likely to fail—is harmful both to individuals and to society.
For example, in occupations where safety is paramount, employers could be incentivized to incorporate measures of cognitive ability into the recruitment process. Above all, the policies of public and private organizations should be based on evidence rather than ideology or wishful thinking.
WASHINGTON (AP) — Anxious students — not to mention their parents — can get a heads-up for how the redesigned SAT might look in two years.
Sample questions for the new version of the college-entrance test were released on Wednesday by the College Board, which announced last month that the new test will include real-world applications and require more analysis. Students will also be asked to cite evidence to show their understanding of texts.
A reading passage provided as an example was adapted from a speech delivered in 1974 by Rep. Barbara Jordan, D-Texas, during the impeachment hearings of President Richard Nixon. Test takers must answer questions that best describe Jordan's stance and the main rhetorical effect of a part of the passage.
Another sample question asks test takers to calculate what it would cost an American traveling in India to convert dollars to rupees. Another question requires students to use the findings of a political survey to answer questions.
The College Board said all the information about the redesigned test, which is due out in 2016, is in draft form and subject to change.
"It is our goal that every student who takes the test will be well informed and will know exactly what to expect on the day of the test," College Board President David Coleman and Cynthia Schmeiser, the College Board's chief of assessment, said in a letter posted online.
Every test will include a passage from the U.S. founding documents, such as the Declaration of Independence, or conversations they've inspired, the College Board has said. The essay section, which is becoming optional, will require students to read a passage and explain how the author constructed an argument.
Other changes to the SAT include making a computer-based version of the test an option, getting rid of the penalty for wrong answers, limiting the use of a calculator to select sections and returning to a 1,600-point scale. The College Board said obscure vocabulary words would be replaced with those more likely to be used in classrooms or on the job, and the math section will concentrate on areas that "matter most for college and career readiness and success."
The SAT was once the predominant college admissions exam, but it has been overtaken in popularity by the ACT.
The ACT, which already offers an optional essay, announced last year that it would begin making computer-based testing available. It said Monday that about 4,000 high school students had taken a digital version of the ACT two days earlier as part of a pilot.
Follow Kimberly Hefling on Twitter at http://twitter.com/khefling
The College Board announced changes to the SAT in March, refocusing the college entrance exam on more practical questions that ideally will complement what students are learning in school.
The essay portion of the test — introduced about 10 years ago — will now become optional, while the reading portions will look more for evidence-backed answers and the math portions will have different sections for when students can use a calculator. The College Board released sample questions from the redesigned test this week.
We've compiled a few of the sample questions that show the range of the new test. Read below and see how you would do (answers at the bottom):
Students would have to answer this math problem without a calculator:
But could use one for these two questions:
This reading question focuses on the student's overall comprehension of what they just read:
While this question asks them to connect a graphic with a passage:
ANSWERS: (C, C, A, C, B)
FindTheBest mapped the states with the smartest high school kids, and it seems the farther north you live, the smarter you are.
The research engine looked at scores from the SAT, ACT, AP, and National Assessment of Educational Progress tests from each state's department of education and created a Public School Rating from one to five.
They found that students with the best scores came from New Hampshire (5), Minnesota (4.92), and Massachusetts (4.92), while the worst scores were in Mississippi (2.97).
Check out the map below:
I have wonderful news for all you stressed-out parents of kindergartners whose school play was canceled to focus your 5-year-olds on college and career.
Good news, as well, for you anxious 11th-graders, terrified that the Boone’s Farm you drank instead of writing a five-paragraph report on The Red Badge of Courage will ruin your future forever. Fantastic news for anyone who, like a certain friend of mine named “me,” choked on the SAT. And great, great news for anyone apprehensive about the Common Application and its reputation for glitchiness.
Bard College, a highly selective liberal-arts school in Annandale-on-Hudson, New York, is about to enter the second year of a revolutionary college-admissions experiment: four wickedly challenging essays, 2,500 words each, reviewed by Bard faculty (who, I assume, enjoy grading papers). All four essays get a B+ or higher? You’re in, period. No standardized test, no GPA, no CV inflated with disingenuous volunteer work. Last year, 41 students completed what the college is calling the Bard Entrance Exam—and 17 scored high enough to be admitted.
This coming fall, that number may be substantially larger, as the country’s only true alternative application to an elite school gains publicity. I’m certainly doing my part to push it, because the idea is genius.
The ramped-up pressures of admission, especially to elite schools, are very apparent to any Gen Xer (or older) who spends time with millennial college students. There is no way, for example, that 1994-era me would nowadays be admitted to my own alma mater, Vassar College.
Today, perfection is the bare minimum: 4.0 and co-valedictorian, perfect Board scores, spotless extracurriculars, 30 hours a week of volunteering you don’t enjoy, a perfectly platitudinous essay about the Challenges In Your Life That Have Made You a Better Person—and then you might make the first cut. Don’t get me wrong—I love a high school goody two-shoes. But are they really the only people who deserve to go to a great college? Does an entire school full of imagination-bereft perfection not result in the sad demise of collegiate fun?
The Bard Entrance Exam aims for exactly the kind of student who, for any number of reasons, doesn’t fit inside that infernal perfection cage—who is instead, as Bard’s Vice President of Student Affairs and Director of Admissions Mary Backlund told me, “someone who really likes learning,” but perhaps “couldn’t be bothered with what they saw as the ‘busy work’ of high school, and instead invested themselves in things not perceived as ‘academic’ in some places, like music or the arts—or just reading on their own.” For these students, Backlund tells me, “this option is a ‘twofer’: They get to apply and do what they love—researching and thinking—all at the same time.”
Granted, many stress-addled seniors don’t research for kicks, and for them, the Common Application is a time saver—although how does a young person memorize her own Social Security number nowadays, if she’s not made to write it by hand into seven different applications in a row? Granted, many universities require supplementary essays—such as Tufts’ infamous prompt about #YOLO.
But the Common App, which acts as a clearinghouse for everything from a student’s name and address to her GPA, extracurriculars, and recommendation dossiers, may still strike artsy or angsty students as poorly indicative of what they have to offer. Its emphasis on the usual prestige-suspects also disadvantages students with more eccentric résumés. But for almost every school in the country, there is no alternative. Except at Bard.
The genius in Bard’s method is that while it might be simpler in construction than the Common Application, it is substantially more difficult. Students have 21 essays to choose from, in three subject areas: social science, history, and philosophy; arts and literature; science and mathematics. “The faculty had a lot of fun” designing the essays, Backlund says, “not to be hard, but to be engaging and open, so that the applicants had something real to chew on and show their thinking abilities.” All require substantial amounts of original research (all sources are available on the portal) and close reading. Last year’s questions included this one:
More precisely, the fact that Uranus's movement did not fit what was predicted by the then-current understanding of planetary motion could be explained by the existence of a not-yet-observed planet—and the planet was then observed right where predicted. Suppose that observatories had looked at the indicated position and had not actually found the predicted planet.
What then? What new questions would this outcome pose for the scientific community? How could they test other explanations for the unexpected motion of Uranus?
Personally, I think any kid who manages to write 2,500 words without joking about “the unexpected motion of Uranus” deserves automatic admission—but seriously, that essay doesn’t mess around.
It’s a fascinating hypothetical; it takes into account astronomy, physics, and math, but also the philosophy of science and the thrill of the unknown that research confronts. Plus, a healthy dose of Feynman! I’ve seen this year’s questions, released June 2, by creating an application myself that, sheesh, I’ll never finish. They are just as tough.
Skeptics have argued that these essays are just another way for privileged students to pay for “help” in their college applications. (I know at least a few unscrupulous and unemployed Ph.D.s who’d be game.) But Backlund assures me Bard has accountability measures: Every incoming freshman takes a three-week workshop in “Language and Thinking” before school officially starts.
“Everyone has to pass L+T to matriculate into the College,” she explains, and at the workshop, faculty will have successful Entrance Exam applicants’ essays in hand. “If, over the three weeks, it becomes obvious there is a real discrepancy” between their admissions essays and their work for L+T, “we can take appropriate action, and the student will not face ‘expulsion,’ as they will not have as yet ‘matriculated’ at the College.”
There are other schools in the U.S.—many, like Bard, elite small colleges—that don’t require the SAT or ACT. St. John’s College (you know, that “Great Books place,” where they all learn geometry in Greek or something) also requires a unique set of essays—but it still requires the Common Application and reviews transcripts. Even my birthplace, Deep Springs College, possibly the most iconoclastic institution of higher learning ever, requires Board scores and transcripts.
But Bard seems to be the sole college of its caliber in the United States to give students the option to absolutely blow their high-school classes and still have a chance to be great in college. Since I find it preposterous to determine a young person’s entire future based on her choices as a 14-year-old, I couldn’t be happier that the BEE, as Backlund puts it, subverts “the mad doggie-tail chase created by U.S. News and the Common App.” (However, Backlund does not begrudge any student who wishes to use the Common App: Bard still accepts it, after all.)
But why don’t more colleges offer alternatives to the traditional application—alternatives that acknowledge that many promising young people simply crash in high school because their lives are messes, or their families are falling apart, or because they just plain hate it? And why stop with essays (which, face it, are not the cup of tea of plenty of kids who might still excel in college)?
What about submitting a spectacular and original science or math project? The detailed business model of a company you invented? I wish more American colleges and universities would stop asking students to jump through a series of increasingly privilege-reifying hoops (the current admissions process favors higher-income students) and start asking applicants to show their real potential.
So, eccentric 15-year-olds of America: Keep skipping class to paint graffiti murals on the side of the abandoned White Castle! Quit that insincere volunteer work and get the part-time job you really need! And when some authoritarian tells you you’ll never get into a good college with behavior like that? You tell him you’re going to Bard.
Ideally, multiple-choice exams would be random, without patterns of right or wrong answers. However, all tests are written by humans, and human nature makes it impossible for any test to be truly random.
Because of this fundamental flaw, William Poundstone, author of "Rock Breaks Scissors: A Practical Guide to Outguessing and Outwitting Almost Everybody," claims to have found several common patterns in multiple-choice tests, including computer-randomized exams like the SATs.
After examining 100 tests — 2,456 questions in total — from varied sources, including middle school, high school, college, and professional school exams; drivers' tests; licensing exams for firefighters and radio operators; and even newspaper quizzes, Poundstone says he found statistical patterns across all sources.
From this data, he determined valuable strategies for how to greatly up your chances of guessing correctly on any exam, whether you're stumbling through a chemistry final or retaking your driver's test.
While Poundstone emphasizes that actual knowledge of the subject matter is always the best test-taking strategy and that "a guessing strategy is useful to the extent that it beats random guessing," he suggests you always guess when you're unsure. And guessing smartly only improves your chances of being correct.
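The "always guess" advice has simple arithmetic behind it. A minimal sketch of the expected-value calculation (the scoring rules used here are the SAT's published ones — +1 for a right answer, a 1/4-point deduction for a wrong one on the old test, no deduction on the redesigned test — not figures from Poundstone's book):

```python
def expected_score(n_choices, penalty):
    """Expected points from guessing at random among n_choices,
    where a wrong answer costs `penalty` points."""
    p_right = 1 / n_choices
    return p_right * 1 + (1 - p_right) * (-penalty)

# Random guess on the old five-choice SAT: breaks exactly even.
print(expected_score(5, 0.25))  # 0.0
# Eliminate even one choice and guessing turns positive.
print(expected_score(4, 0.25))  # 0.0625
# With no wrong-answer penalty, any guess beats a blank.
print(expected_score(5, 0.0))   # 0.2
```

This is why eliminating obviously wrong choices first, then guessing, was worthwhile even under the old penalty regime.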
Here are a few of Poundstone's tactics for outsmarting any multiple-choice test:
First, ignore conventional wisdom.
You've probably been given test-taking advice along the lines of "always guess the middle answer if you don't know," or "avoid any answer that uses the words never, always, all, or none," at some point in your life. However, according to Poundstone, this conventional wisdom doesn't hold up against statistics. In fact, he found that the answers "none of the above" or "all of the above" were correct 52% of the time. Choosing one of these answers gives you a 90% improvement over random guessing.
Look at the surrounding answers.
Poundstone found that correct answer choices rarely repeated consecutively, so looking at the answers of the questions you do know will help you figure out the ones you're stuck on. For example, if you're stuck on question No. 2, but know that the answer to No. 1 is A and the answer to No. 3 is D, those choices can probably be eliminated for No. 2. Of course, "knowledge trumps outguessing," Poundstone reminds us. Cross out answers you know are wrong based on facts first.
Choose the longest answer.
Poundstone also noticed that the longest answer on multiple-choice tests was usually correct. "Test makers have to make sure that right answers are indisputably right," he says. "Often this demands some qualifying language. They may not try so hard with wrong answers." If one choice is noticeably longer than its counterparts, it's likely the correct answer.
Eliminate the outliers.
Some exams, like the SATs, are randomized using computers, negating any patterns usually found in the order of the answers. However, no matter their order, answer choices that are incongruous with the rest are usually wrong, according to Poundstone. He gives the following sample answers from an SAT practice test, without including the question:
Because the meaning of "gradual" stands out from the other words in the right column, choice E can be eliminated. Poundstone then points out that "haphazard" and "improvised" have almost identical meanings. Because these choices are so close in meaning, A and C can also be eliminated, allowing you to narrow down over half the answers without even reading the question. "It's hard to see how one could be unambiguously correct and the other unambiguously wrong," he says. For the record, the correct answer is D.
High school students across the country compete each year for spots at the most selective universities. They need high GPAs and well-rounded extracurricular activities on their resumes, but a standout SAT score is crucial to get into a top school, and could even make or break an application.
We compiled a list of the 25 colleges with the highest SAT scores. This 2013-2014 preliminary data came from the National Center for Education Statistics (NCES). The NCES data gave us the 25th and 75th percentile scores for each test section, which we then averaged and summed to get the average overall score for each school.
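The methodology above reduces to a small calculation: average each section's two percentile scores, then sum the three section averages. A sketch, using hypothetical section scores rather than actual NCES figures:

```python
def average_sat(section_percentiles):
    """section_percentiles maps each section name to its
    (25th percentile, 75th percentile) score pair; returns the
    summed midpoint across all sections."""
    return sum((p25 + p75) / 2 for p25, p75 in section_percentiles.values())

# Hypothetical example, not a real school's data:
example = {
    "critical_reading": (680, 760),
    "math": (690, 780),
    "writing": (670, 770),
}
print(average_sat(example))  # 2175.0
```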
25. (TIE) Carleton College
Average SAT Score: 2135
Critical Reading: 705
Carleton College is a highly ranked liberal arts school in Northfield, Minnesota, with about 2,000 undergraduate students.
25. (TIE) Carnegie Mellon University
Average SAT Score: 2135
Critical Reading: 690
Carnegie Mellon has an urban campus located in Pittsburgh, Pennsylvania, and accepts just 27.8% of its applicants.
23. (TIE) Amherst College
Average SAT Score: 2155
Critical Reading: 715
Amherst College in Amherst, Massachusetts, is ranked No. 2 among liberal arts colleges in the U.S., according to U.S. News.