Articles on this Page
- 09/29/11--08:23: The Long, Troubled History Of Cheating On The SAT
- 09/30/11--09:07: Don't Make These Eight Common College Application Mistakes
- 01/31/12--15:34: An Elite College Fudged SAT Scores To Boost National Ranking
- 03/27/12--10:42: It's About To Get A Whole Lot Tougher To Cheat On The SATs
- 04/23/12--12:38: These SAT Scores Show That We Really Are Getting Smarter
- 05/21/12--11:54: These Are The 10 Best Public High Schools In America
- 07/27/12--12:28: Too Many Students Are Getting Perfect Scores On The SAT
- 12/13/12--07:43: Columbia Student Writes About How She 'Bought' Her Way Into College
- 01/28/13--16:29: Bucknell University Inflated SAT Scores For Years
- 02/27/13--11:15: Bill Ackman Almost Lost His Bar Mitzvah Money Betting On The SATs
- 04/01/13--08:56: The 25 Colleges With The Highest SAT Scores
- 08/28/13--14:35: Can You Answer Questions From The New College Exit Exam?
- 09/25/13--21:01: These Are The College Majors With The Smartest Students
When seven teenagers from tony Great Neck, N.Y., were arrested Tuesday for cheating on the SATs, initial reactions from knowledgeable observers were reminiscent of Claude Rains’ Captain Renault in Casablanca: “I’m shocked, shocked to find that gambling is going on in here!”
Indeed, cheating on the SATs is not widespread. But it does have a long and sordid history that began long before the Great Neck kids (now charged with misdemeanors) allegedly hired an Emory University whiz kid to take the test for them using fake IDs.
It is estimated that every year, the College Board flags 2,500 cases of scores that look suspect. After further investigation, approximately 1,000 of those cases result in the students’ test scores being “withheld.” (As policy, the Board doesn’t accuse kids of cheating; it just refuses to send the suspect scores to colleges—at least not without a giant scarlet asterisk.)
Tom Ewing, a spokesperson for ETS, the company that designs and administers the SAT and ensures its security, could recall only two other situations involving impersonation over the last 20 years. Both resulted in criminal convictions, including one count of perjury when the impersonator testified.
But 2,500 suspected cheats out of 2.25 million kids who take the test annually is a pretty insignificant percentage: one-tenth of 1 percent. So why do seven kids from Long Island garner so much media attention?
Photo caption: Sam Eshaghoff is led from Nassau Police Headquarters. Two of the students whom he allegedly helped were not identified by name. (ABC News)
“College admission is enormously stressful and competitive,” says Scott Farber, the founder and CEO of A-List, an SAT-tutoring company that counts more than a few Great Neck students among its clients. “These kids work incredibly hard: getting good grades in their classes, improving their SAT scores, and excelling in their extracurricular activities. Sometimes the pressure gets to be too much and one or two go off the deep end.”
“There are two types of cheating,” says Bob Schaeffer, the public education director of Fairtest, a non-profit group that has successfully challenged the fairness and validity of the SAT and the ACT. “What we saw happen in Great Neck—impersonation—is the less common of the two. The more common form of cheating is collaboration.”
Schaeffer explains that in the 1990s and into the early 2000s, impersonation was a bigger problem among athletes who needed to get a certain SAT score in order to qualify for athletic scholarships. More than a handful of these college hopefuls were aided by unscrupulous coaches or agents.
“Ironically, it was sometimes a competing coach from a college that was unsuccessful in recruiting the player who blew the whistle.”
Scott Gomer, a spokesman for the College Board’s arch-rival, the ACT, refused to specify how many test scores his organization suspects are less than kosher. But the once much-smaller ACT now rivals the College Board in the number of high school kids who take its dreaded exam every year. Gomer confirmed that “external” whistleblowers are one of the main sources for flagging suspected cheaters. That group includes not just jilted coaches, but also NCAA compliance officials (when they’re not chasing down boosters and jersey-selling players), college admission officers, and high school guidance counselors.
But it’s on the collaboration front where the College Board’s and ACT’s high-tech cheating-detection tools turn catching cheats into an episode of SAT: SVU.
The ACT’s Gomer referred to these sophisticated tools as “internal audits.” Fairtest’s Schaeffer was much more forthcoming: “First they look to see if there is a change in a student’s score from an earlier test. What typically triggers an inquiry is a 350-point shift in the combined SAT math and critical reading, or a 250-point change in either test.”
“They then look at handwriting,” he continues. “Every test-taker has to copy a few sentences—in their own handwriting—saying they are not going to cheat. This is not an honor code, but it does create a handwriting sample that can be compared later on.”
“But the most sophisticated tool,” says Schaeffer, “is an algorithm that compares the answers of kids sitting near each other during the test. The program looks for patterns among incorrect answers. Correct answers will all look alike, but incorrect answers should follow different patterns. When kids are collaborating, the wrong answers follow an identical pattern, and the algorithm sniffs it out.”
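The idea Schaeffer describes can be sketched in a few lines of code. This is a toy illustration of the principle only, not ETS's actual (unpublished) algorithm; the answer key, the minimum-overlap cutoff, and the threshold are all invented for the example.

```python
# Toy illustration of collaboration detection: flag a pair of test-takers
# whose *wrong* answers match suspiciously often. Correct answers agree by
# definition, so only shared errors carry a signal.

ANSWER_KEY = ["A", "C", "B", "D", "A", "B", "C", "D", "A", "B"]

def shared_wrong_answers(sheet1, sheet2, key):
    """Count questions both students got wrong, and how often those
    wrong answers were identical."""
    both_wrong = 0
    identical_wrong = 0
    for a1, a2, correct in zip(sheet1, sheet2, key):
        if a1 != correct and a2 != correct:
            both_wrong += 1
            if a1 == a2:
                identical_wrong += 1
    return both_wrong, identical_wrong

def looks_collusive(sheet1, sheet2, key, threshold=0.9):
    """Independent students should miss questions in different ways;
    a near-perfect overlap among wrong answers is a red flag."""
    both_wrong, identical = shared_wrong_answers(sheet1, sheet2, key)
    return both_wrong >= 3 and identical / both_wrong >= threshold

# Two students copying from each other: their wrong answers match exactly.
copier_a = ["A", "C", "D", "C", "A", "A", "C", "A", "A", "B"]
copier_b = ["A", "C", "D", "C", "A", "A", "C", "A", "A", "B"]
print(looks_collusive(copier_a, copier_b, ANSWER_KEY))  # True
```

A real screen would also account for seating charts and the statistical chance of coincidental overlap, but the core signal is the same: identical errors, not identical successes.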
Geographical dispersion also presents cheating opportunities. A number of years ago the FBI shut down an East Coast cheating ring that provided questions and answers to West Coast test-takers for a hefty price. The problem is now exacerbated with the soaring number of international kids taking the standardized tests and seeking admission to American colleges and universities. (The SAT alone is offered six times a year abroad at more than 6,000 test sites in 170 countries.) And a quick search on the Internet revealed two YouTube videos spelling out how to cheat.
Cheating is not unknown abroad. Last year a lecturer at a Korean “cram school” was arrested after taking the SAT in Seoul and then emailing the questions to two Korean students studying in Connecticut. Because of the time difference, the two students were able to research the answers. They scored 2,250 and 2,210 out of a perfect 2,400, much higher than their previous attempts, which triggered an audit.
Interestingly, opposition to the standardized tests is growing in many corners. More than 775 colleges and universities now have a test-optional application process. If high school kids don’t want to take the SAT or ACT, they don’t have to; the colleges will make admission decisions solely on grades, essays, recommendations, and sometimes interviews.
Critics like Fairtest’s Schaeffer put it this way: “The real question is why so many colleges still rely on tests that are subject to such manipulation.” A-List’s Farber agrees. “The real issue for standardized tests and the college application process is the fundamentally unequal access to resources. We work with schools, educational non-profits, and community-based organizations across the country in an effort to level the playing field. This country cannot succeed if we continue to relegate students to second-class status simply because they don’t have the same opportunities.”
Unfortunately, for more than a million kids, that debate is taking too long—the first SAT exam of the school year takes place this Saturday. For the mostly high school seniors taking it, it is their last chance to shine as they scramble to complete their college applications.
So how did the Great Neck kids get caught? High school officials were suspicious of at least one of the students’ high scores; they weren’t reflective of the kid’s overall academic performance. So they called in ETS, which conducted an investigation that lasted several months. In the end, the case was referred to the Nassau County District Attorney, and the jig was up.
Thanks to the Internet and perhaps even the common application, which is accepted at hundreds of educational institutions nationwide, applying to college has gotten easier for students. But actually getting accepted to a school is much harder these days.
"You've got more kids applying to more schools, thinking this will improve their odds," says Steve Cohen, co-author of the book Getting In! The Zinch Guide to College Admissions and Financial Aid in the Digital Age. "But most kids hurt their chances of getting in without even realizing it."
To help students improve their chances of being accepted to their first-choice schools, we asked experts to outline what others have done wrong in the past.
Treating every application the same
While more and more colleges and universities are allowing students to submit the common application, most still require students to submit supplemental information or answer additional essay questions as well. Other schools still prefer to use their own application.
Whatever the case, students should tackle each application or supplement separately, since answers that might get them into Brown University might not net them acceptance to Harvard (or vice versa). And just as a cover letter should be tailored to the company you hope to work for, students shouldn't recycle essays or answers that fail to illustrate why they are a good fit at that particular college.
"You're always going to have a better shot of getting into a school if you make a personal connection with the admissions office," says Craig Meister, former admissions officer and president of Tactical College Consulting. "The people who want to do the least end up with the fewest acceptance letters."
Forgetting to proofread your essay
Maybe it's because they're so focused on coming up with a good answer to the question, or because they're so overloaded with work senior year, but many students submit their college essays without proofreading them.
According to Jennifer Louden, senior associate director of undergraduate admissions at Loyola University in Maryland, while punctuation and spelling mistakes abound, it's even more common for an essay to arrive completely devoid of capital letters.
"[Students] write it as if they were sending a text message," she says. It's important to give your essay a good combing over before sending it out to the admissions office.
Fast-forwarding through the directions
Another telltale sign that your application was more or less a rush job is failing to answer a question the way a college asks for it to be answered.
For instance, Meister cites a question on the common application that asks students to list their current course load by formal title and credit value. If a student lists only the course title, or simply describes it by its generic name, it signals that they aren't detail-oriented or that they don't care enough to read through the entire application before submitting it.
"Colleges are looking for reasons not to accept you," Meister says, and incomplete or faulty responses could easily be one of them.
The number of Chinese students in American colleges has tripled in the last three years.
Unfortunately, most colleges don't know what they are getting into.
According to The New York Times, many of these students arrive with barely passable English, despite dominant SAT scores and impeccable applications.
For many Chinese applicants, the secret to getting into college is paying the right agency; about 80 percent of Chinese applicants use such agents, according to a report by Zinch Group.
Aoji Education Group, for example, offers parents a package complete with a money-back guarantee if the student doesn't get into one of the five colleges applied to.
Zinch Group's report also exposed these staggering facts about Chinese applicants:
- 90 percent of recommendations are fake
- 70 percent of essays are written by someone else
- 50 percent of transcripts are fabricated
- 30 percent of financial aid applications contain lies
- 10 percent of awards/achievements are fake
The agency EIC Group charges between $4,000 and $6,000 for a completely fabricated application package, with the price scaling with the caliber of the target school. The agency also takes a cut of the student's financial aid.
Students might spend another $3,000 to prepare for standardized tests, that is, if they don't pay someone else to take the tests instead.
For under $10,000, Chinese applicants are all but guaranteed enrollment in a U.S. college. Most Chinese students and parents don't know or care about the application process; they just want their child to get a good American education.
And colleges, squeezed by the economic recession, are loving the influx of money from China's rising middle class.
The trend is slowly forcing American colleges to change their teaching system, hire "covert" admissions agents in China, and beef up their English language programs.
New scientific studies continue to escalate the hard work vs. raw talent debate.
One study from Vanderbilt University shifts things in favor of raw talent. Professors David Lubinski and Camilla Persson Benbow discovered that SAT scores at age 12 are a good predictor of one's eventual college major and career.
These charts compare 12 year olds at the bottom end of the top quartile for math with a group at the top end of the top quartile for math; in other words, the lesser math nerds versus the super math nerds.
Compared to lesser math nerds, super math nerds were twice as likely to get a doctorate. 55% of the super math nerds earned income equal to or greater than the median, versus 46% (strangely, less than half) for lesser math nerds. Super math nerds were twice as likely to earn a patent, and around seven times as likely to earn tenure at a top university.
Needless to say the super math nerds are miles ahead of the average kid.
Males were also more likely to pursue advanced degrees in math and inorganic sciences, while females usually pursued the life sciences and humanities.
But, a person's skills were found to be a much better indicator of their future degree than gender.
Females were more likely to end up in multidisciplinary fields, because those with exceptional quantitative skills often had strong verbal skills as well. This is why, for example, many quantitatively strong females become physicians or study life sciences rather than engineering.
Unfortunately, most males were typically weaker overall in verbal reasoning. But in math, the top tier was mostly dominated by boys.
The study also went on to measure the spatial ability of the students. Too often, the importance of spatial reasoning is overlooked in talent assessments.
The three scores were graphed three-dimensionally with the y-axis being verbal, x-axis being math, and the size/length of the arrow being spatial score.
Not too surprisingly, students liked and disliked subjects based on their strengths and weaknesses.
An interesting finding was that physical science majors were strong in all three, while business majors were weak in all three measured aspects. Engineers also notably have great spatial ability, while lawyers have the worst spatial ability.
The researchers then grouped the jobs and majors into three categories -- humanities, science (and math), and other -- and made predictions based on the students' abilities.
The graph below shows that most kids end up in professions predicted by their skills. The percentage of correct predictions is shown in bold.
The dashed lines divide the three categories of jobs. The shaded triangle represents the different occupations. The unshaded triangle represents the different college majors.
The SAT scores combined with spatial ability at age 12 can give a good estimate of what college major and job you will end up in. It can also ultimately separate the good from the exceptional.
But, the research authors also noted that students who completed their degrees, regardless of subject, all had similar levels of career satisfaction and life satisfaction.
Claremont McKenna College is currently ranked by US News and World Report as the nation's ninth-best liberal arts college—but, the college admitted yesterday, that ranking is based on falsified SAT scores.
The small California school says that since 2005, "a senior administrator" who has since resigned submitted false scores to US News & World Report and other publications that rank colleges.
Insiders say Richard C. Vos, vice president and dean of admissions, is responsible, the New York Times reports. The scores were inflated by an average of only 10 to 20 points each, and it is unclear whether the revelation will affect any of the school's rankings.
A family's income is a strong indicator of how their child will score on the SAT exam, according to author Daniel Pink.
Pink created a chart of combined SAT score and family income based on data from the College Board's 2011 Total Group Profile Report.
Parents who bring home over $200K can expect their kids to score an impressive 1721.
Thinking of spending money on getting your kids commercial SAT test prep?
It's probably not worth it.
The chart is pretty damning: white dots are students who had test prep between the PSAT and SAT; black dots are students who did not have test prep. See a stark difference? Neither do we.
From the article:
When researchers have estimated the effect of commercial test preparation programs on the SAT while taking the above factors into account, the effect of commercial test preparation has appeared relatively small. A comprehensive 1999 study by Don Powers and Don Rock published in the Journal of Educational Measurement estimated a coaching effect on the math section somewhere between 13 and 18 points, and an effect on the verbal section between 6 and 12 points. Powers and Rock concluded that the combined effect of coaching on the SAT I is between 21 and 34 points. Similarly, extensive meta-analyses conducted by Betsy Jane Becker in 1990 and by Nan Laird in 1983 found that the typical effect of commercial preparatory courses on the SAT was in the range of 9-25 points on the verbal section, and 15-25 points on the math section.
Security around the SAT and ACT college admissions exams will increase this fall, in an attempt to make sure test-takers are who they actually claim to be.
The increase in security comes on the tail of a cheating scandal involving about a dozen students in Long Island who paid other students $500 to $3,600 to take the tests for them, according to The New York Times.
In one case, a male even took the test for a female student.
Under the new rules, students will have to submit a photo when applying to take the exam, the College Board announced today. On test day, the roster will include each student's photo so it can be matched against the student's ID before the exam. The photo will also appear with the student's score results. The changes will be enacted nationwide.
You can read more about the history of cheating on the SATs here.
There's been a lot of debate lately over whether IQ or hard work determines success.
Then there's the "Flynn Effect," which is the theory that worldwide intelligence has increased at the rate of 10 IQ points every 30 years for nearly the past century.
New research by Jonathan Wai and Martha Putallaz of Duke University shows that SAT scores have indeed steadily risen, even among the top 5 percent of scorers. In their paper, "The Flynn Effect Puzzle," Wai and Putallaz looked at about 1.7 million test scores, including SAT and ACT, between 1981 and 2000.
There are a few interesting takeaways from the study:
- Scores improved across gender
- Scores improved primarily in the math sections, which indicates that problem-solving ability has improved
The researchers also offered some potential reasons for the increase:
- Improved education and nutrition
- "More cognitive stimulation arising from the greater complexity of more recent environments, for example, the broad exposure to television and video games"
The authors concluded that:
"IQ gains extend to every level. This result, along with the finding that the rate of gain in the right tail on the math subtests is the same as in the middle and lower parts of the distribution, illustrates for the first time that it is likely the entire curve that is rising at a remarkably constant rate."
Of course, there could be other factors, too, such as easier SAT and ACT tests — or students are simply better at preparing for standardized tests.
Standardized test measurement errors are twice as large as reported, lending weight to the argument that the tests are not a good metric of student achievement.
Teacher effectiveness, knowledge deterioration, and school environment can all affect how students perform on the tests, according to the new study by researchers at Stanford University, the University of Virginia, the University at Albany, and the Center of Policy Research.
Current test scores fail to account for day-to-day differences in student achievement, the researchers said. They studied math and language arts scores for New York City middle schoolers.
Accounting for differences in students "can yield meaningful improvements in the precision of student achievement and achievement-gain estimates," the researchers said.
The city's standardized testing practices were also called into question in a recent New York Times column that highlighted two schools in the Bronx. Located only blocks apart, one school received an A on its report card, while the other one received an F.
The difference came down to one group of students answering a few more test questions correctly.
Here's a chart showing the error measured by testing companies (dotted line) compared with what researchers in the study found:
Composite scores were determined by graduation rate, percentage of students accepted to college, test scores and tests offered. More than 2,300 schools across the country were surveyed.
Texas, Arizona and Florida each have two schools on the list, but which school took the crown?
#10 Thomas Jefferson High School for Science and Technology in Alexandria, VA
This selective school led the top 10 in average SAT and AP scores, but a relatively low number of AP exams per student and the absence of ACT results knocked it down the list.
#9 Suncoast Community in Riviera Beach, FL
Suncoast became a magnet school in 1989 in the Palm Beach area and moved into an $80 million campus facility in 2010.
#8 Stanton College Preparatory School in Jacksonville, FL
Named for Edwin Stanton, Abraham Lincoln's Secretary of War, Stanton Prep attracts students from all over Central Florida by a selection and lottery process.
The proposed Aug. 3 date would have allowed the gifted—and affluent—students enrolled in the college prep program to take the test at a time without the academic stress of the normal school year.
The College Board released a statement today saying the summer test date was "inappropriate" and that it would work against the board's mission to give all students an equal chance at taking the test.
Bloggers, editorial writers, and non-profits such as Fairtest had blasted the College Board for offering rich kids yet another advantage in college admissions.
The one percent loses this time.
Angel investor Kai Peter Chang says he cheated his way to a great SAT score – 1510.
You can too, he says – cheat not on the SATs themselves, but on "the way they are reported."
In an answer to the Quora question "How does one prepare enough to get a perfect score on the SAT?" he explains:
Understanding the SATs, in the Context of College Admissions
Elite colleges and universities are in the difficult position of evaluating tens of thousands of hopeful new students to decide which [X] percent of those applicants they will extend offers to. They must do it with limited information and limited manpower (admissions officers) within a very narrow window of time.
SATs are a standardized way to have a numerical representation of your "aptitude," so every applicant, from a prep-school blue blood to a rough-and-tumble dropout, can be ranked on a single (seemingly) objective scale.
One of the things elite colleges desire is "smart" students, and since they cannot administer IQ tests on all applicants, the SATs become the imperfect proxy used to quantify that portion of an applicant's profile.
What does a high (or perfect) SAT score reflect? It's a composite function measuring the test-taker's
1. Native raw intelligence (30%)
2. Reading comprehension ability/speed and general vocabulary (20%)
3. Error-correction skill (the ability to spot one's own mistakes and fix them before committing to a seemingly-right answer that is a bait/trap) (20%)
4. Ability to game the SAT system itself (related to 3.) (30%)
(The percentages are my estimates as to what each ability/skill contributes to a high score. Each version of the SATs would have differently weighted percentage breakdowns, but the fundamentals remain the same.)
There is little you can do to change 1., so most tips revolve around some combination of improving your 2., 3., and 4. Other respondents here have offered excellent advice on just that. I would only add that a formal SAT tutoring course from folks at the Princeton Review can help a LOT with 3. and 4.
All that said ...
Not all High SAT Scores are Equal
Several of the respondents have reported taking the SATs multiple times - and indeed, multiple attempts at the test are one of the most sure-fire ways to hit a high or perfect score. Repetition of the drill, familiarity with the SAT's protocol, and reduced anxiety through multiple exposures to an otherwise-stressful situation all help you perform at your personal best.
Imagine your personal composite "aptitude" which is a Function of the above-mentioned 1. 2. 3. and 4.
F (1., 2., 3., 4.) = SAT Score
Once you've maxed out your 2., 3., and 4., multiple attempts at the SATs will, over time, reveal a scatter diagram around your average composite aptitude, with scores above, below, and heavily clustered around your "true" ability.
Every college will tell you they only consider your highest SAT score to evaluate your application. This, like most things large organizations tell you, is a lie.
Why would they throw away all that useful data you so generously provided for them?
Indeed, if you made 20 attempts at the SATs and averaged, say, 1440 and peaked once at 1550, they will view that high score at the tail as the outlier that it is, and discount it accordingly. As far as they are concerned, you are a 1440 guy who got lucky once, not a 1550 guy.
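Chang's point about outliers is easy to verify with a toy simulation. The "true ability," noise level, and attempt count below are assumptions chosen for illustration, not his data: the best of many sittings predictably lands well above the average, which is exactly why a lone peak among 20 scores reads as luck.

```python
import random

random.seed(42)

TRUE_ABILITY = 1440   # hypothetical "true" composite score
NOISE_SD = 40         # assumed test-to-test variation

def one_attempt():
    # Each sitting scatters around true ability; clamp to the 1600-point
    # scale the anecdote uses, rounded to the nearest 10.
    score = random.gauss(TRUE_ABILITY, NOISE_SD)
    return min(1600, max(400, round(score / 10) * 10))

attempts = [one_attempt() for _ in range(20)]
mean_score = sum(attempts) / len(attempts)

print(f"average of 20 attempts: {mean_score:.0f}")
print(f"best single attempt:    {max(attempts)}")
# The maximum of 20 noisy sittings sits well above the mean, even though
# the underlying ability never changed.
```

Run it a few times with different seeds and the gap between the peak and the average persists; only the peak's exact value moves around.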
How to Cheat the SATs
I understood at age 15 that a single, solitary high SAT score was far more impressive than multiple attempts at it.
At the time (this may have changed), the ETS tracked your SAT-taking history through your Social Security Number. With this in mind, every time I took the SATs between Sophomore and Junior year, I deliberately wrote my SSN off by one digit.
When I finally got the score I wanted (1510 in my case), I called up ETS and raised hell, telling them they screwed up my SSN and demanding that they correct it to my true SSN.
"Oh, we are so sorry Mr. Chang. We will fix that for you immediately. Please accept our apologies ..."
Consequently, only one high score was attached to my true SSN, and it became the basis for my applications.
Thus, when I applied to colleges, I gave them only one data point that *I* knew to be an anomaly, but they were forced to accept the 1510 as representative of my true "aptitude" since they were denied any other data points.
ETS may have changed their policy since then, but if not, it works like a charm.
The rest is up to you ...
Every year, hundreds of thousands of high school seniors with nearly impeccable academic records submit their applications to highly selective colleges. And every year, the admissions officers at these schools have to find a way to decide how to allocate the limited number of seats in each of their freshman classes.
How do they do it?
For just about every highly selective school, the major selection criteria are a student's SAT scores, high school grade point average, the difficulty of coursework, and extracurricular participation. Each school emphasizes different measurements depending upon its institutional focus; however, there remains one constant that plays a very large role in admissions: the SAT.
Admissions officers at schools like Harvard, Princeton, Stanford, and Yale will tell you that there’s an issue: The vast majority of students whose applications they review have perfect or near-perfect GPAs and SAT scores, so these metrics can’t be used to distinguish between the very best candidates. This means that other yardsticks—such as a student’s involvement in extracurricular activities—have become, by default, much more important because the objective academic metrics don’t have enough headroom.
Every year, over 200,000 intellectually talented 7th graders from across the country take the SAT, which is designed for the average 11th grader, to distinguish the academically tall from the academically giant. By the time those students get to the 11th grade, a majority of them will likely reach within 100 to 200 points of a perfect score. But this is simply because the test is not challenging enough for them.
Today, a perfect score on the SAT is 2400. A score of 3000 or 4000 is not currently possible, but that is because the test is simply not hard enough to measure a score that high. But if the test were more difficult, who’s to say that some of these talented students might not be able to achieve a higher score?
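The ceiling problem described here can be illustrated with a toy measurement model. The ability values and noise level are invented for the example, not drawn from ETS data: once ability exceeds what the scale can register, the test reports the same number for very different students.

```python
import random

random.seed(7)

CEILING = 2400  # maximum reportable SAT composite

def observed_score(true_ability, noise_sd=60):
    """Toy model: the reported score is true ability plus test noise,
    truncated at the test's ceiling and rounded to the nearest 10."""
    raw = random.gauss(true_ability, noise_sd)
    return min(CEILING, max(600, round(raw / 10) * 10))

# Two hypothetical students with very different underlying ability;
# "2800" stands in for ability the 2400-point scale cannot register.
strong, exceptional = 2400, 2800

strong_scores = [observed_score(strong) for _ in range(5)]
exceptional_scores = [observed_score(exceptional) for _ in range(5)]

print(strong_scores)       # at or somewhat below 2400
print(exceptional_scores)  # pinned at 2400 -- the ceiling hides the gap
```

A harder test is, in this model, simply one with a higher ceiling: raise `CEILING` and the two students' score distributions separate again.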
One way to solve this problem would be for the Educational Testing Service to design a harder SAT, and for all we know, something like this is already in the works. But for the purposes of selective college admissions, I offer a much simpler and more pragmatic solution for the short term: Highly selective colleges should require the GRE—or another graduate-school admissions exam—instead of the SAT as a measurement of academic aptitude. This is because the GRE is essentially just a harder SAT.
Tens of thousands of students every year who are in direct competition for the slots at the nation’s most elite universities are likely in danger that the SAT will not capture the true level of their academic ability. This can put them at a disadvantage in the college-admissions process.
Of course, one could argue that even these graduate-admissions exams wouldn’t have enough headroom for the most talented students. But if selective colleges required a test that were at least more difficult than the SAT, it would likely reduce the problem.
This would ease the dilemma of admissions officers seeing a perfect 2400 on the SAT and not knowing whether that student has the academic potential to exceed the demands of the test.
If talented high school students took a harder test, it could also have a secondary effect: teaching them a greater sense of humility at a critical moment in their lives.
She believes this is true for many of the kids who get into Ivy League schools, and that, she explains, is "massively unfair."
"I bought myself a higher score because my family could afford to, and many of my peers at Columbia did the same. We like to think we're all here because we earned it. But many of us are here because we could pay the price of admission," Buccino writes.
Here's the meat of her argument from her op-ed:
Admission into the Ivy League and other top schools is also considered to be meritocratic. A major part of a student's application is his or her SAT score. Admissions officers use this "standardized" test to compare students from different backgrounds against each other.
But in practice, the SAT is far from standardized. Many high schoolers take prep classes that teach not actual knowledge but SAT-specific tricks. Some of these classes, like Princeton Review's SAT Honors prep class, can cost roughly $2,000.
She writes that Ivy League schools should stop focusing so much on SAT scores for admission.
Bucknell University admitted that it inflated its freshman SAT scores for seven years.
From 2006 to 2012, Bucknell omitted certain students' SAT scores from the averages it reported; most of the omitted scores were lower than the scores it included.
As a result, the average scores Bucknell reported were 16 points higher than the true averages.
Bucknell University president John C. Bravman revealed the shady practices in a statement to the board of trustees.
"Enrollment management leadership no longer with the University prepared these inaccurate numbers," he wrote. "That leadership reported the inaccurate numbers to the Board and to other officers of the University, internal offices and governance committees, and posted the inaccurate numbers on the University website."
I am disappointed to report that when calculating the SAT scores (math and critical reading) of the classes entering the University from 2006 through 2012, the University omitted from the calculation the SAT scores of a number of students. Some of these omitted scores were higher than the SAT scores that the University reported, but most were lower. Meanwhile, for several of the years in which errant SAT data were reported, the University reported ACT scores for the entering classes that were actually one point lower than the correct figures.
The outcome of all these errors was that our SAT scores across each of the seven years were reported to various organizations, most notably this Board, as being higher than they actually were. Specifically, during each of those seven years, the scores of 13 to 47 students were omitted from the SAT calculation, with the result being that our mean scores were reported to be 7 to 25 points higher than they actually were on the 1600-point scale. During those seven years of misreported data, on average 32 students per year were omitted from the reports and our mean SAT scores were on average reported to be 16 points higher than they actually were.
Bravman insisted that Bucknell is correcting all the historical data and has scheduled a call with U.S. News & World Report, publisher of one of the most influential college rankings, to set the record straight.
It takes some serious stones to tell the world you think a company's stock is going to zero.
The story is called "The Big Short War" (by William Cohan) and it opens with an anecdote about Ackman's first big bet. He was in high school and he bet his father all his Bar Mitzvah money (kudos to him for still having it) that he could get a perfect score on the verbal.
In 1984, when he was a junior at Horace Greeley High School, in affluent Chappaqua, New York, he wagered his father $2,000 that he would score a perfect 800 on the verbal section of the S.A.T. The gamble was everything Ackman had saved up from his Bar Mitzvah gift money and his allowance for doing household chores. “I was a little bit of a cocky kid,” he admits, with uncharacteristic understatement.
Tall, athletic, handsome with cerulean eyes, he was the kind of hyper-ambitious kid other kids loved to hate and just the type to make a big wager with no margin for error. But on the night before the S.A.T., his father took pity on him and canceled the bet. “I would’ve lost it,” Ackman concedes. He got a 780 on the verbal and a 750 on the math. “One wrong on the verbal, three wrong on the math,” he muses. “I’m still convinced some of the questions were wrong.”
We've already forgotten our SAT scores. Willingly.
The college admissions process at top schools is extremely competitive, with only a small percentage of applicants receiving acceptance letters.
One major factor that universities use to evaluate students is the SAT score.
A perfect SAT score will not guarantee a student admission to a top school because it is only one aspect of a larger application that includes coursework, recommendations, and extracurricular activities. But a great score will certainly help.
The College Board publishes the mid-50-percent SAT scores of incoming freshmen for each of the three sections — critical reading, math, and writing — at each college.
For example, a range of 710-780 means that 50 percent of the students scored in that range, with 25 percent scoring above, and 25 percent scoring below that range.
We took the average of the ranges and ranked the schools based on the highest average SAT score.
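The averaging-and-ranking method described above is simple enough to sketch in code. The school names and score ranges below are made up for illustration, not the actual figures from the list:

```python
# Sketch of the ranking method: average each section's mid-50% range,
# sum the three section averages, then sort schools by that total.
# School names and ranges here are hypothetical.
schools = {
    "School A": {"critical_reading": (700, 780), "math": (710, 790), "writing": (700, 780)},
    "School B": {"critical_reading": (670, 760), "math": (690, 780), "writing": (680, 770)},
}

def average_sat(ranges):
    """Midpoint of each section's mid-50% range, summed across sections."""
    return sum((low + high) / 2 for low, high in ranges.values())

# Rank schools from highest to lowest average combined score.
ranking = sorted(schools, key=lambda s: average_sat(schools[s]), reverse=True)
for school in ranking:
    print(school, average_sat(schools[school]))
```

Taking the midpoint of each range treats the mid-50% band as symmetric around its center, which is the simplest reading of the method the article describes.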
#25 University of Notre Dame (Tie)
Average SAT Score: 2130
Critical Reading: 705
Notre Dame, in South Bend, Indiana, is known for its football team, but its students are smart too.
#25 Haverford College (Tie)
Average SAT Score: 2130
Critical Reading: 705
Haverford College is located in Haverford, Pennsylvania, just outside of Philadelphia.
#24 Carleton College
Average SAT Score: 2135
Critical Reading: 715
Carleton is located in Northfield, Minnesota, and was ranked the eighth best liberal arts college by U.S. News.
Next spring, nearly 300 colleges and universities nationwide will offer an optional 90-minute exit exam testing students on skills such as critical thinking, problem solving, and writing ability.
The test is meant to evaluate the "soft skills" that top employers have started demanding.
This isn't the first time that colleges are offering this SAT-like Collegiate Learning Assessment (CLA), but it is the first time that students will be given their individual scores. Students are encouraged to share their scores with potential employers as proof they're capable of careful, thoughtful analysis, according to Chris Jackson, director of partner development for the CLA. In the past, the scores were used solely for metrics within the college system.
"These tests are becoming increasingly necessary," Jackson tells Business Insider. "Students might come out of college with a specific major, but employers want to know if they have transferable skills, meaning the skills necessary to change jobs and careers without having to be completely retrained."
The test requires students to analyze evidence from different sources, distinguishing rational from emotional arguments and fact from opinion, by answering open-ended questions about hypothetical but realistic scenarios.
"The test is looking specifically to see if students can recognize potential for bias or logical fallacy in a document," Jackson says. Students will not do well on this exam if they tend to accept information as it is and not think critically about it. On the other hand, those who score well are able to state their positions with supporting evidence.
What happens if students score poorly on the exam?
Jackson says it doesn't really affect them because the test is optional. They are not required to show employers their assessment, but if they do, it will work to their advantage. Test-takers also receive subscores in each category assessed, so even students with a poor overall score may still show that they excel in certain soft skills.
The exam costs $35, but many institutions will foot the bill to get their students to take the test.
Below, Jackson has provided some sample questions, along with materials that resemble the CLA's format:
You are a staff member who works for an organization that provides analysis of policy claims made by political candidates. The organization is non-partisan, meaning that it is not influenced by, affiliated with, or supportive of any one political party or candidate. Pat Stone is running for reelection as the mayor of Jefferson, a city in the state of Columbia. Mayor Stone’s opponent in this contest is Dr. Jamie Eager. Dr. Eager is a member of the Jefferson City Council.
Dr. Eager made three arguments during a recent TV interview: First, Dr. Eager said that Mayor Stone’s proposal for reducing crime by increasing the number of police officers is a bad idea. Dr. Eager said “it will only lead to more crime.” Dr. Eager supported this argument with a chart that shows that counties with a relatively large number of police officers per resident tend to have more crime than those with fewer officers per resident. Second, Dr. Eager said “we should take the money that would have gone to hiring more police officers and spend it on the Strive drug treatment program.”
Dr. Eager supported this argument by referring to a news release by the Washington Institute for Social Research that describes the effectiveness of the Strive drug treatment program. Dr. Eager also said there were other scientific studies that showed the Strive program was effective. Third, Dr. Eager said that because of the strong correlation between drug use and crime in Jefferson, reducing the number of addicts would lower the city’s crime rate. To support this argument, Dr. Eager presented a chart that compared the percentage of drug addicts in a Jefferson ZIP Code area to the number of crimes committed in that area. Dr. Eager based this chart on crime and community data tables that were provided by the Jefferson Police Department.
In advance of the debate later this week, your office must release a report evaluating the claims made by Dr. Eager. You have collected the attached information, and your supervisor asks you to spend the next ninety minutes to review these documents and prepare a memo to the senior staff in response to the three sets of questions (on the next page). (In this case, a memo is an internal document that is concise and comprehensive; there is an example of a memo in the set of materials provided to you.)
ACCOMPANYING MATERIALS FOR SCENARIO 1
QUESTION 1: Dr. Eager claims that Mayor Stone’s proposal “will only lead to more crime.” (Document E contains the chart Dr. Eager used to support this claim.) What are the strengths and/or limitations of Dr. Eager’s position on this matter? Based on the evidence, what conclusion should be drawn about Dr. Eager’s claim? Why? What specific information in the documents led you to this conclusion?
QUESTION 2: Dr. Eager claims that “we should take the money that would have gone to hiring more police officers and spend it on the Strive drug treatment program.” What are the strengths and/or limitations of Dr. Eager’s position on this matter? Based on the evidence, what conclusion should be drawn about Dr. Eager’s claim? Why? Is there a better solution, and if so, what are its strengths and/or limitations? Be sure to cite the information in the documents as well as any other factors you considered (such as the quality of the research conducted on various drug treatment programs) that led you to this conclusion.
QUESTION 3: Dr. Eager claims that “reducing the number of addicts would lower the city’s crime rate.” (Documents C and F exhibit the charts Dr. Eager used to support this statement.) What are the strengths and/or limitations of Dr. Eager’s position on this matter? Based on the evidence, what conclusion should be drawn about Dr. Eager’s claim? Why? What specific information in the documents and any other factors you considered led you to this conclusion?
EXAMPLE HIGH-QUALITY RESPONSE FOR QUESTION 1: I do not agree with Dr. Eager’s claim that Mayor Stone’s proposal for reducing crime “will only lead to more crime.” His only support for the claim hinges on the Document E chart, which shows a weak correlation between the number of police officers per 1,000 residents and the number of robberies and burglaries per 1,000 residents. However, Dr. Eager is mistaking correlation for causation and failing to consider alternate explanations for such a correlation. More than likely, higher volumes of robberies and burglaries per 1,000 residents are occurring in concentrated urban areas or poorer neighborhoods with crime problems.
As a result more officers will naturally be allocated to these areas rather than to other areas with low crime rates. However, that does not mean that the increase in police officers in these areas is causing the extra crime. By only observing correlation and not examining the underlying circumstances, Dr. Eager is being shortsighted in his analysis. If anything the problem is that even though more police officers have been allocated to high crime areas, these problem areas still simply do not have enough police personnel to adequately deal with the problems. As such Mayor Stone’s proposal possesses merit that Dr. Eager’s claims fail to observe.
SCENARIO 2: Dr. Greer claims that “reducing cell phone usage while driving motorized vehicles would lower the city’s vehicle-related accident rate.” (Document B exhibits the chart Dr. Greer used to support this statement.)
ACCOMPANYING MATERIALS FOR SCENARIO 2:
QUESTION: What are the strengths and/or limitations of Dr. Greer’s position on this matter? What specific information in Documents A and B led you to this conclusion? What additional information, if any, would you like to have had?
EXAMPLE HIGH-QUALITY RESPONSE: I cannot agree with Dr. Greer that “reducing cell phone usage while driving motorized vehicles would lower the city’s vehicle-related accident rate.” Dr. Greer’s strategy of looking for root causes of vehicle related accidents is a good one, but cell phone use while driving may not be the primary cause of vehicular accidents in Stoneville.
The chart he showed in his TV interview (Document B) seems to show that vehicle-related accidents increase along with the percent of registered drivers using cell phones while driving. However, Dr. Greer is either misunderstanding the information he gathered from Document A to create his chart, or he is misleading the public. What his chart (Document B) does not show is the population of each region.
Therefore, the chart ends up comparing a number with a percent, which is not meaningful. Dr. Greer is correct in saying that the number of vehicle-related accidents increases with the total number of registered drivers living in each region, but he fails to consider number of accidents per 1,000 drivers. When I look at the tables provided by the police department (Document A), I can see that the number of vehicle-related accidents per 1,000 drivers stays relatively constant regardless of the percentage of drivers using cell phones while operating a motorized vehicle.
You would expect the region with 1% cell phone users while driving and the one with 10% to have very different vehicular accident rates, but in fact, they are the same at 8.59. This suggests that reducing cell phone use while driving a motorized vehicle may not affect the vehicular accident rate at all.
There are many things that can cause vehicle-related accidents. The North region has 5% cell phone use among drivers but a noticeably higher accident rate of 9.04 per 1,000 drivers, which leads one to wonder what is going on in that region. It would be wise to examine it to get an idea of the other possible causes of vehicular accidents.
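The per-1,000-drivers normalization in the sample response can be illustrated with a short calculation. The region figures below are invented to mirror the rates quoted in the response (8.59 and 9.04); they are not taken from the actual CLA documents:

```python
# Hypothetical region data; numbers chosen only to reproduce the
# 8.59 and 9.04 rates quoted in the sample response.
regions = {
    # registered drivers, accident count, % of drivers using cell phones
    "North": {"drivers": 50_000,  "accidents": 452,  "cell_pct": 5},
    "South": {"drivers": 100_000, "accidents": 859,  "cell_pct": 1},
    "East":  {"drivers": 200_000, "accidents": 1718, "cell_pct": 10},
}

def per_thousand(region):
    """Accidents per 1,000 drivers: normalizes away region size."""
    return region["accidents"] / region["drivers"] * 1000

for name, r in regions.items():
    # Raw accident counts mostly track population; the normalized
    # rate lets regions of different sizes be compared directly.
    print(f"{name}: {per_thousand(r):.2f} accidents per 1,000 drivers "
          f"({r['cell_pct']}% cell phone use)")
```

In this made-up data, the 1% and 10% cell-use regions share the same rate (8.59) while the 5% region is higher (9.04), which is exactly the pattern the response uses to argue that cell phone use is not what is driving the accident rate.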
College students who major in physical sciences tend to be smarter than those who major in psychology, judging from the 2013 SAT Report on College & Career Readiness from the College Board.
The report includes data on SAT scores by intended major for the college-bound class of 2013. We've taken the liberty of ranking majors based on these scores, which are the best commonly available proxy for intelligence. The results are somewhat limited, since different majors require different skills and the SAT is not a perfect test, but some notable trends emerge.
Multi/interdisciplinary studies students have the highest combined scores, which makes sense since they need a wide array of talents and are likely to be either ambitious double majors or intellectual scholars.
Physical science majors (physics, chemistry, etc.) have the second highest combined scores; naturally they tend to be serious students, bravely embarking into fields where B.S. is not an option.
A large group of undecided students beat out the similarly ruminative Philosophy majors by one point, claiming the tenth highest combined score.
As we go down the ranking of combined scores, there are increasingly technical majors — these students aren't wasting their time with a liberal arts education.
Here's a ranking by combined scores:
Now, a ranking of majors by critical reading score. Naturally, the humanities subjects rise higher, but check out that still-impressive score for physical science majors:
Next, a ranking of majors by math scores — and the winner is ... math majors.
Finally, a ranking of majors by writing score. Multi/interdisciplinary scores win here too.