hi --
direstraits007 Wrote: Test 1 (Free): 720 (Q50/V38)
Test 2: 680 (Q49/V34)
Test 3: 740 (Q49/V42)
Test 4: 680 (Q48/V35)
Test 5: 770 (Q51/V44)
Test 6: 760 (Q51/V42)
those are some serious scores. nice job.
So, how close are the MGMAT scores to real GMAT scores? I assume the instructors have enough data on hand to relate MGMAT scores to real GMAT scores with an approximate delta. So please let me know where I stand and, if all other factors stay the same, what my score range on the GMAT would be.
nah, we actually don't track statistics like that -- there are just too many variables that could invalidate any data we might collect.
the four most important variables that would render such data suspect, if we were to go to the trouble of collecting them, are:
(1) response bias: the only official scores we're going to hear are those from students who actively choose to come back and report them to us. as you might imagine, students with middle-of-the-road results aren't normally going to be motivated to do so. as with pretty much any other self-reported survey, we would receive disproportionate numbers of extreme results -- both from students who were elated with a performance beyond their wildest dreams, and also from students who were disappointed in their performance. as a result, we would probably wind up with some sort of bimodal distribution that would in no way reflect the true nature of our students' performances.
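this self-selection effect is easy to see in a toy simulation. the score distribution and the reporting probabilities below are entirely invented for illustration; the point is only that when extreme results are more likely to be reported, the middle of the distribution gets hollowed out:

```python
import random

random.seed(0)

# hypothetical "true" score distribution, roughly normal around 650
# (all numbers here are invented for illustration)
true_scores = [int(random.gauss(650, 60)) for _ in range(10_000)]

def reports_back(score, expected=650):
    # toy reporting model: the further a result is from expectations,
    # the more likely the student is to come back and tell us about it
    surprise = abs(score - expected)
    return random.random() < min(1.0, 0.05 + surprise / 150)

reported = [s for s in true_scores if reports_back(s)]

def mid_fraction(scores):
    # share of scores in a narrow middle-of-the-road band
    return sum(630 <= s <= 670 for s in scores) / len(scores)

print(f"middle fraction, all students: {mid_fraction(true_scores):.2f}")
print(f"middle fraction, reported:     {mid_fraction(reported):.2f}")
```

the reported sample under-represents middle-of-the-road results relative to the full population, which is exactly the bias that would distort any statistics we collected.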
(2) confounding variables in test administration: we would basically have to trust that our students had taken their practice tests in perfect exam-like conditions. of course, this isn't always going to be the case: many students will cut corners with such things as taking excessively long breaks, pressing the pause button, and so on.
(3) study plan: many of our students exhaust our practice tests early, continue to study for another month or two, and then evaluate the rest of their preparatory trajectory with other practice tests (such as gmat prep). it would be disingenuous at best, and downright dishonest at worst, to compare such students' results to the results obtained by students who took the official exam right after finishing our battery of tests.
(4) random variation: remember that the official exam, despite its adaptive nature, still has a fairly sizable standard error -- approximately 30 points. this means that there is going to be considerable "noise" in your exam results. therefore, if the delta between our practice tests and the real test is small enough, it will be completely obliterated by this random noise.
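a quick back-of-the-envelope simulation shows how a ~30-point standard error swamps a small systematic delta. the 20-point "true" delta here is invented purely for illustration:

```python
import random
import statistics

random.seed(1)

TRUE_DELTA = 20   # hypothetical systematic practice-vs-real offset (invented)
SE = 30           # approximate standard error of the official exam

# each "student" contributes one observed delta = true delta + exam noise
observed = [TRUE_DELTA + random.gauss(0, SE) for _ in range(50)]

print(f"mean observed delta:     {statistics.mean(observed):.0f}")
print(f"spread (stdev):          {statistics.stdev(observed):.0f}")
# with noise this large, a fair number of individual observations even
# carry the wrong sign, so any one student's delta is nearly meaningless
wrong_sign = sum(d < 0 for d in observed)
print(f"wrong-sign observations: {wrong_sign} of {len(observed)}")
```

the spread of individual observations is larger than the delta itself, so no single student's practice-vs-real comparison tells you much of anything.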
the short answer is, basically, no, we don't collect aggregate data -- and we have plenty of good reasons not to.
despite the lack of exhaustive data, however, from many students' feedback we CAN claim a high degree of fidelity between our practice tests and the real thing.
Also, I noticed two surprising things in the MGMAT:
1: Even if I get 25 questions correct in verbal, my verbal score comes out at 38 or higher. But in GMAT Prep I didn't see this phenomenon: I took one GMAT Prep test, intentionally marked wrong answers, and got 25 correct in verbal out of 41. My verbal score came out to 30. Perhaps my analysis is wrong, but that's what I want to hear from the instructors.
well, remember, it's an adaptive test!
if you're taking our practice tests and giving an honest effort, then you are probably missing the harder questions on the section. on the other hand, if you're simply marking random wrong answers on gmat prep, then you are probably giving wrong answers to a bunch of easier questions in addition to the harder ones.
this discrepancy would of course have a massive effect on your overall score -- even with an identical number of incorrect answers, you're going to have a much higher score overall if those incorrect answers occur almost exclusively on difficult questions.
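this is easiest to see with a toy adaptive test. the Rasch-style (1PL) model, the grid-search estimator, and the difficulty pool below are illustrative assumptions, not MGMAT's or GMAC's actual algorithm; the point is only that two students with the same number of wrong answers can land at very different ability estimates depending on where those wrong answers fall:

```python
import math

def p_correct(theta, d):
    """Rasch (1PL) model: chance of a correct answer at ability theta,
    question difficulty d."""
    return 1 / (1 + math.exp(-(theta - d)))

def mle(items):
    """Grid-search maximum-likelihood ability from (difficulty, correct) pairs."""
    best, best_ll = 0.0, -math.inf
    for step in range(-40, 41):
        theta = step / 10
        ll = sum(math.log(p_correct(theta, d)) if c
                 else math.log(1 - p_correct(theta, d))
                 for d, c in items)
        if ll > best_ll:
            best, best_ll = theta, ll
    return best

def adaptive_test(answers):
    """Serve each question at the current ability estimate (clamped to the
    pool's difficulty range); answers is a scripted list of True/False."""
    theta, items = 0.0, []
    for correct in answers:
        d = max(-3.0, min(3.0, theta))   # next question's difficulty
        items.append((d, correct))
        theta = mle(items)
    return theta

# both students miss exactly 4 of 10 questions...
strong = adaptive_test([True] * 6 + [False] * 4)  # misses only late, hard ones
sloppy = adaptive_test([False] * 4 + [True] * 6)  # misses early, easy ones
print(strong, sloppy)
```

the student who misses only hard questions ends up routed to, and credited for, much harder material than the student who misses easy ones, so the final estimates diverge sharply despite identical raw counts.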
2: The last two MGMAT tests skewed my score to 750+. Is it because MGMAT wants to boost students' morale on its last two tests? I ask because I answered the same number of questions correctly on the last two tests as on the fourth test, in which I got 680, yet the score delta was very large. Please explain.
you may want to go back to those tests and take a look at the reported difficulty levels of the questions.
one problem that currently exists on our cat tests -- a problem we're working on remedying -- is a shortage of 700-800 level problems for students with scores as high as yours. such students (including you, i'd presume) tend to run out of those questions around the fifth exam. at that point, the adaptive algorithm serves the hardest questions remaining in the pool -- the 600-700 questions -- and must extrapolate your performance from those. if you nail those questions, it's going to attribute an extremely high score to your performance.
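a toy Rasch-style calculation (again, purely illustrative, not MGMAT's actual algorithm) shows why extrapolating from a capped pool inflates the estimate: if the hardest available items top out at some moderate difficulty and a strong student answers them all correctly, the likelihood has no interior maximum, so the estimate runs off toward the top of the scale:

```python
import math

def loglik(theta, items):
    # Rasch (1PL) log-likelihood for a list of (difficulty, correct) pairs
    p = lambda d: 1 / (1 + math.exp(-(theta - d)))
    return sum(math.log(p(d)) if c else math.log(1 - p(d)) for d, c in items)

# pool exhausted: only difficulty-2 items left, all answered correctly
capped = [(2.0, True)] * 8

# the likelihood keeps increasing with theta -- no interior maximum, so
# a bounded estimator pegs the score at the ceiling of the scale
for theta in (2.0, 3.0, 4.0, 5.0):
    print(theta, round(loglik(theta, capped), 3))
```

in other words, a perfect run through the leftover 600-700 questions gives the algorithm nothing to push back against, so the extrapolated score lands at the very top of the range.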