Our exams have been independently tested and verified by Dr. Lawrence Rudner, the former chief psychometrician of GMAC (ie, the person in charge of the GMAT's algorithm!). You can read what he said here:
https://www.manhattanprep.com/gmat/blog ... -endorsed/

One quote from Dr. Rudner:
"I can further attest that Manhattan Prep’s GMAT practice exams do an excellent job of predicting a student’s score on the actual GMAT examination. "
The *scores* are as accurate as we can make them, and that accuracy is measured relative to the real exam. Of course, the full pool of GMAT test takers is not also taking our exams, but we calibrate our scoring algorithm based on our students' scores on the real exam (vs. their performance on their last exam in our system). Since real-exam scores are based on the full pool of GMAT test takers, we are effectively calibrating against the full pool. (Also, we do not assign our own percentiles. We use the GMAC percentiles associated with specific scoring levels. GMAT scores themselves are calibrated to specific skill levels, and those do not change over time. Only the percentiles associated with specific skill levels change over time; ie, the percentage of test takers capable of scoring at a specific level changes over time.)
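To make that last distinction concrete (the numbers below are made up purely for illustration; this is not actual GMAC data): the same score maps to the same skill level in any year, but the percentile attached to it can drift as the test-taking pool changes.

# Hypothetical percentile tables for two different years (made-up values).
# The score-to-skill calibration is fixed; only the percentile attached to
# a given score shifts as the pool of test takers changes over time.
percentile_tables = {
    2015: {700: 89, 650: 80},
    2023: {700: 85, 650: 73},
}

score = 700  # represents the same skill level in both years
for year, table in percentile_tables.items():
    print(f"{year}: a {score} is percentile {table[score]}")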
Several things to note:
(1) All standardized tests have a standard deviation (meaning: your scoring ability isn't just a single point on the spectrum, but a range of possible scores), and this deviation is wider than most people expect. The real test has a standard deviation of about 30 points (and the real test, of course, has a narrower standard deviation than any independent practice test). Our practice tests have a standard deviation of about 50 points. (I don't know how that compares to other test prep companies' SDs, as we're the only company I've seen that publishes its SD.)
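A rough way to read that SD (my own illustration, not an official formula): treat a practice-test score as a point estimate, with the SD giving the width of the band around it.

# Illustrative only: treat a practice score as a point estimate with the
# 50-point SD mentioned above; roughly 68% of outcomes fall within +/- 1 SD.
def score_band(practice_score, sd=50):
    return (practice_score - sd, practice_score + sd)

print(score_band(650))  # -> (600, 700)

So a 650 on a practice test is better read as "probably somewhere around 600 to 700" than as exactly 650.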
(2) Our tests also have an average "skew" of less than 10 points (ie, if you look across *all* of our students and compare each student's last MPrep score to his or her real-test score, the average difference is less than 10 points). Of course, because of the SD above, among other factors, individual performance can vary much more widely.
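In code terms, the skew statistic described above amounts to something like this (a sketch under my own assumptions; the score pairs below are hypothetical):

import statistics

# Hypothetical pairs: (last MPrep practice score, real GMAT score)
paired_scores = [(680, 690), (710, 700), (650, 670), (700, 695)]

# Average skew = mean signed difference, real test minus last practice test
skew = statistics.mean(real - practice for practice, real in paired_scores)
print(f"average skew: {skew:+.2f} points")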
(3) I do think that our quant sections feel harder than the real thing, and there are two reasons for that.
First, our test does not include any experimental questions, while the real test includes somewhere between 5 and 10 experimentals in the quant section. (They don't disclose. My best guess: 7-ish.) Experimentals can come at any difficulty level (the whole point is that they have not yet been assigned an official difficulty level). The higher you are scoring, therefore, the more likely you are to see an experimental that is below your ability level, so you get some "freebies" along the way that make the test feel easier. But it isn't really easier, since those questions didn't count anyway.
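To illustrate that reasoning (a toy simulation of my own devising, not GMAC's actual item-selection process): if experimental difficulties are spread across the whole scale while the scored questions track your level, then the higher your ability, the more experimentals land below you.

import random

random.seed(0)

# Toy model: ability and difficulty both live on a 200-800 scale, and the
# 7 experimentals are drawn uniformly (they have no assigned level yet).
def average_freebies(ability, n_experimentals=7, trials=10_000):
    below = 0
    for _ in range(trials):
        difficulties = (random.uniform(200, 800) for _ in range(n_experimentals))
        below += sum(d < ability for d in difficulties)
    return below / trials

for ability in (550, 650, 750):
    print(ability, round(average_freebies(ability), 2))

In this toy model, a 750-level test taker sees roughly 6 of the 7 experimentals below his or her level, vs. about 4 for a 550-level test taker.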
Second, I think that some of our quant problems do require more time and more calculation than the average real-test problem, and we have deliberately left that alone. I'd rather have you overtrain and feel that the real test was a little easier than have the reverse happen.
So our test (especially our quant section) may feel harder to some... but the scores themselves are still as accurate as we can make them.