We should discuss how to gather some data. While denying the issue exists is stupid, our crazy Aussie is right that we need data to isolate which factors are having an effect.
Ultimately, when this is hopefully brought to PD's attention, the data gathered could be useful in arguing our point.
Say, a questionnaire on SurveyMonkey. Ask about perceived frequency of occurrence, some controls (DS3 users, NAT type, bandwidth, etc.) and some sample controls (GTP WRS division, membership of online leagues/clubs, etc.). Exact questions should be discussed, and then, with the help of some moderators, we could ask for the survey to be featured in the news. Statistically we'll have a selection-bias problem (people who already quit GT5 because of the very issue we're investigating won't be around to answer), but that in no way makes the whole exercise worthless.
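Once responses are in, even a simple cross-tabulation of perceived frequency against each control would show whether anything correlates. A sketch only; the field names and answer categories below are hypothetical placeholders, since the real questions would come out of the discussion above:

```python
# Sketch of cross-tabulating survey responses once collected.
# Field names and answer categories are hypothetical placeholders.
from collections import Counter

# Each response: (perceived lag frequency, NAT type, uses DS3, WRS division)
responses = [
    ("every race", "NAT 3", True,  "D2"),
    ("rarely",     "NAT 1", False, "D4"),
    ("most races", "NAT 2", True,  "D3"),
    ("every race", "NAT 3", False, "D1"),
]

# Cross-tabulate perceived frequency against NAT type, the sort of control
# that would show whether strict NAT correlates with the problem.
crosstab = Counter((freq, nat) for freq, nat, _, _ in responses)
for (freq, nat), n in sorted(crosstab.items()):
    print(f"{nat:6} | {freq:10} | {n}")
```

The same tally works for any pair of columns, so one pass over the raw export covers every control in the questionnaire.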
Thanks, Dan, I completely agree. I know you meant the other crazy Australian ;-)
I still can't quite believe no one has logged races through Wireshark/NetMon et al., but I shall try to attend to that in the coming weeks. The magazine latency test was interesting.
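For whoever does get a capture, the first useful summary is probably bytes sent per peer. Purely as an illustration, here is a stdlib-only sketch that builds a tiny synthetic libpcap file and tallies it; a real log would come straight from Wireshark, and the port and addresses here are made up:

```python
import struct

# Minimal sketch: tally captured bytes per IPv4 source address from a
# classic libpcap file. The pcap parsing is hand-rolled for illustration;
# with a real Wireshark log you would check the link-layer type first.

PCAP_GLOBAL = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def fake_udp_packet(src_ip: bytes, payload_len: int) -> bytes:
    """Build a skeletal Ethernet+IPv4+UDP frame (fields mostly zeroed,
    just enough structure for the parser below)."""
    eth = b"\x00" * 12 + b"\x08\x00"                     # EtherType = IPv4
    total = 20 + 8 + payload_len
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total, 0, 0,
                     64, 17, 0, src_ip, b"\x0a\x00\x00\x01")
    udp = struct.pack("!HHHH", 5000, 5000, 8 + payload_len, 0)
    return eth + ip + udp + b"\x00" * payload_len

def bytes_per_source(pcap: bytes) -> dict:
    """Walk pcap records and tally captured bytes by IPv4 source."""
    tally, off = {}, 24                                  # skip global header
    while off < len(pcap):
        _, _, incl_len, _ = struct.unpack_from("<IIII", pcap, off)
        frame = pcap[off + 16: off + 16 + incl_len]
        if frame[12:14] == b"\x08\x00":                  # IPv4 frames only
            src = ".".join(str(b) for b in frame[26:30])
            tally[src] = tally.get(src, 0) + incl_len
        off += 16 + incl_len
    return tally

# Two fake "peers" sending different amounts of data.
frames = [fake_udp_packet(b"\x0a\x00\x00\x02", 100),
          fake_udp_packet(b"\x0a\x00\x00\x03", 40)]
capture = PCAP_GLOBAL + b"".join(
    struct.pack("<IIII", 0, 0, len(f), len(f)) + f for f in frames)
print(bytes_per_source(capture))
```

Comparing those per-peer totals across room types is exactly the differential data we're missing.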
What I was alluding to earlier was trying to work out what is going on from data that is actually available for us to test with: the replay wall-clock issue is one example, and comparing enough replays at 12-16 hosts certainly won't hurt.
Estimated figures for bandwidth per person, for both the fixed-host and mesh topologies and for each room-quality setting, would be useful and should be obtainable. With enough data for differential analysis, you could work out how GT5's code changes packet distribution in a mesh, if indeed it does.
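A rough back-of-envelope model of what those per-person figures might look like; the packet size and update rate below are pure guesses (exactly the numbers packet logging would pin down), and GT5 may well do something cleverer than a naive full-mesh broadcast:

```python
# Back-of-envelope model of per-player upload bandwidth under the two
# topologies. Packet size and update rate are guesses for illustration;
# the real figures are what Wireshark logging would establish.
PACKET_BYTES = 128      # assumed size of one car-state update
TICKS_PER_SEC = 10      # assumed update rate
OVERHEAD = 28           # UDP + IPv4 header bytes per packet

def upload_bps(players: int, mesh: bool) -> int:
    """Upload bandwidth in bits/s for one (non-host) player."""
    per_packet = (PACKET_BYTES + OVERHEAD) * 8
    if mesh:
        # Full mesh: each player sends their state to every other player.
        streams = players - 1
    else:
        # Fixed host: a client sends one stream, to the host only
        # (the host itself still carries players - 1 streams).
        streams = 1
    return per_packet * TICKS_PER_SEC * streams

for n in (8, 12, 16):
    print(n, "players:", upload_bps(n, mesh=True), "bps mesh,",
          upload_bps(n, mesh=False), "bps to fixed host")
```

The point of the model is the shape, not the numbers: mesh upload grows linearly with room size, which is why 12-16 player rooms are the interesting test case.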
It is worth noting (again, as others have said) that a replay of an online race is only an approximation for anyone other than the player who recorded it: jumpiness/twitchiness and freaky driving from another car are usually indicators of lag between that driver and the recording player.
How much another driver's network problems affect you depends, at minimum, on collision-detection zones and draft (both complicated by gaps in telemetry).
Rubber on track, Armco movement, weather, and damage are other possibles, but the only way to turn this into a falsifiable test is to run the same participants while changing a single factor (visible damage, say) under relatively stable conditions (someone's sibling/child firing up LimeWire, iView, BBC iPlayer or Netflix partway through is clearly not stable conditions). Record all replays, and hopefully have at least one party log the race through Wireshark; lather, rinse, repeat. All change from premiums to standards, go again. Flip to strong draft, try again. Drop one or two participants, do that at least twice, and so forth. Preferably have speedtest/Measurement Lab NDT results for everyone between each race, without leaving the room. Then move to a different track and repeat until you have enough data points. Ideally, have more than one party (possibly an observer in live timing) video each race to compare with the replays (not livestreamed, Speedy!).
Then change to fixed host, and start again.
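The schedule above amounts to a one-factor-at-a-time run matrix. A small sketch; the factor names and baseline levels mirror the post, but the exact values are placeholders to argue over:

```python
# Run matrix for the one-factor-at-a-time schedule sketched above:
# hold everything at baseline and vary a single factor per session,
# so each comparison against the baseline run is falsifiable.
BASELINE = {
    "host": "mesh",
    "cars": "premium",
    "draft": "weak",
    "damage": "off",
    "players": 16,
}

# Each entry: factor to vary, and the non-baseline levels to try.
VARIATIONS = {
    "damage": ["on"],
    "cars": ["standard"],
    "draft": ["strong"],
    "players": [15, 14],   # drop 1 or 2 participants
    "host": ["fixed"],
}

def run_matrix():
    """Yield one session config per run: baseline first, then each
    single-factor change against that baseline."""
    yield dict(BASELINE)
    for factor, levels in VARIATIONS.items():
        for level in levels:
            run = dict(BASELINE)
            run[factor] = level
            yield run

runs = list(run_matrix())
print(len(runs), "sessions per track")
```

Multiply by tracks and by the "do that at least twice" repeats and the session count adds up fast, which is the next point.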
Sound time-consuming?
Large data sets from qualitative survey responses do sound easier. Apologies for the wall of text; I didn't have time to make it shorter.
EDIT: Also, do what Cicua said.