You had an issue where a significant base of your players couldn't log into this game (and ONLY this game/app) for over 6 hours and you aren't going to do anything?
We all understand the internet is complicated, but when it affects ONLY YOUR APP, then either you did something wrong on your end or one of the vendors you rely on made a mistake. Maybe it's the servers your vendors use for your Canadian customers, or those in the Netherlands, or some in the UK. At least give us some explanation... Otherwise we'll just come to treat it as a permanent outage and walk away from this inconsistent experience for good.
You have no evidence that ONLY this game app was affected. In addition, even if it was, that doesn't necessarily make it DB's fault or responsibility.
I am writing to your customer services because I bought a Ford Ka from one of your dealerships.
Yesterday the local road was closed. I'm not sure why it was closed but my neighbours and I were unable to use our cars for 6 hours. All my friends have said that their local roads were fine.
As this is obviously your fault, please could you compensate me.
Yours sincerely,
Ann Gry
Not the same thing at all. It's not like it affected a couple of people in, say, Denmark and everyone else was fine. People from all over the world were affected by this outage, and no one who posted here noticed problems with anything but their Timelines and the DB forums.
This may or may not have been DB's fault, as I said, but it does make it more likely that the problem was located near their end. Hard to tell, but shouldn't it be in DB's interest to investigate this?
And who's asking for compensation? We're just expressing our surprise that DB isn't trying to find out what went wrong.
So we have looked into it and nothing stood out.
On our end everything was working fine, and requests were coming in as well.
The fact that players could reach the game by using a 3G/4G connection is indicative of a wider internet issue.
None of our providers reported any outage.
From the reports I have seen I can confirm that:
- it didn't affect everyone around the world, only a few specific regions
- it affected only a small portion of our players.
This does not minimize the inconvenience this caused to those affected but it does make it hard to track what could have caused the issue.
Thank you Shan, I appreciate the effort put into looking at this and the acknowledgement of our frustrations.
Thank you for investigating further.
If I may ask one specific question: were any of the DB servers upgraded, updated, or otherwise modified for improved or new IPv6 usage in any way?
I know it's a bit of a long shot, but it was a commonality I noticed.
My phone doesn't use IPv6, but it did when on wifi - except when I went through a VPN about 3-4 hours later; the VPN didn't support IPv6, so it was IPv4.
And I realize that there are many bumps and paths along the way to make a connection from here to there, but with the common factors in mind, it's one more route that should be looked at as a possible cause.
(Because if it was, for instance, a test of new server routing paths, and everything shows green on your end, the diagnostics may not reveal that there is a compatibility issue. I've seen this a couple of times with other large servers being updated to bridge IPv4/IPv6. A rough sketch of the kind of check I mean follows this post.)
Also, just out of curiosity, why didn't anyone reply over the weekend, even with a simple "we don't see anything on our end" or the like? I know it wouldn't likely be main staff, but with 3 threads of people saying there was a problem, I'm very stunned that no one said anything at all.
It wasn't a holiday weekend. If it had been, I would completely understand a communications blackout.
I'm just asking so I understand what to expect in the future when something else goes wrong on a weekend.
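For reference, here is a minimal sketch of the kind of IPv4-vs-IPv6 check described above: it resolves a hostname separately over each protocol family and attempts a TCP connection to each returned address. The hostname and port are placeholders, not DB's actual endpoint, and this is only an illustration of the idea, not an official diagnostic.

```python
# Minimal sketch (placeholder hostname/port, not DB's real endpoint): check
# whether a host resolves and accepts TCP connections over IPv4 and IPv6
# separately, to spot cases where only one protocol family is failing.
import socket

HOST = "game.example.com"   # placeholder hostname
PORT = 443                  # assuming the game talks over HTTPS

def check(family, label):
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror as err:
        print(f"{label}: DNS lookup failed ({err})")
        return
    for *_, addr in infos:
        try:
            with socket.socket(family, socket.SOCK_STREAM) as sock:
                sock.settimeout(5)
                sock.connect(addr)
            print(f"{label}: connected to {addr[0]}")
        except OSError as err:
            print(f"{label}: could not connect to {addr[0]} ({err})")

check(socket.AF_INET, "IPv4")
check(socket.AF_INET6, "IPv6")
```

Running something like this over wifi and again over a 3G/4G connection (or a VPN) during an outage would show whether only the IPv6 path was failing.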
Thank you for looking into this, Shan. I fully understand that there are a lot of moving parts with these sorts of wide-ranging game outages.
For those saying this is a non-issue, I get that it's easy to blame the user / "internet reasons" because each case can be so unique - but obviously there were some clear patterns here that I - and other users - believe are worth investigating. These were factors that impacted this game ONLY for the majority of users reporting back their experience here. I'm just providing my experience in the hopes these issues don't occur again (particularly during an event):
As someone in one of the regions affected, I tested the game on every possible platform and connection type available to me - it did not work on wifi or on 3G/4G/LTE. It did not work for me on iPhone, Android, Facebook, or Steam. The experience was the same every time, with the screen stuck at "Communicating with Starfleet..."
During this time, I tested my internet connection on a wide variety of other apps and games, and only Star Trek Timelines was affected.
As someone who has worked in QA and UX testing of digital platforms in the past, I didn't implement a full testing strategy to isolate the issue, but I certainly tried my darndest to pinpoint the defining factors. (I personally hate it when users come in with their issues but haven't tried to isolate the causal factors beforehand.)
I tried logging in at home, while out of the house, and even on the garbage wifi at McDonald's.
I didn't get a sniff of the game until I logged in 6+ hours later, when the other users impacted also saw the issue resolved. This issue affected ONLY Star Trek Timelines for me, and none of the other 50+ apps or PC programs that I tested over this period.
It became clear during the outage that the issue seemed to be region-based rather than device- or user-related, with other users on the forum providing similar location details.
I'm not looking for compensation. I'm not looking to stir the pot. I just want to make sure the patterns that I saw - and that other impacted users saw as well - were consistent with an outage that only affected Star Trek Timelines. Ultimately, I don't care where the blame falls - but if my experience, and those of others reported here, can help to prevent this type of downtime going forward, then hopefully it's been worth the effort.
Thank you Shan for requesting some further investigation, even if you were unable to locate a problem this time. Perhaps if it happens again on a weekday, and the investigation can happen live during the issue, it will be easier to identify, especially since it sounds highly likely that the problem is not the functionality of the game servers but some kind of regional internet routing issue.
Also thank you Capn Capacitor for such a detailed explanation of your testing; you certainly tested more options than I did. I would suspect that some people's LTE connection is on a different internet provider than their home wifi connection, and therefore very likely uses different IP routing and DNS servers. Since the issue was noted on different continents, perhaps Rogers in Canada and Virgin in the UK, for example, happen to use the same last internet "hop" or two in a tracert to DB's servers, which is different from the one used by Bell, etc. Or they share a DNS server that was having an issue (a rough sketch of how players could compare those last hops follows this post).
Hopefully this information is helpful in case it happens again. I am also really happy to see how positive the messages were, not degenerating into anger or anything - great customer reporting AND customer service response.
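As a rough illustration of the comparison suggested above, the sketch below runs the system traceroute (tracert on Windows) towards a placeholder hostname and prints the last few hops, so players on different providers could compare where their routes converge. The hostname is hypothetical, and the script assumes traceroute/tracert is installed.

```python
# Rough sketch: run the system traceroute (tracert on Windows) towards a
# placeholder hostname and print the last few hops, so players on different
# ISPs can compare where their routes converge before reaching the servers.
import platform
import subprocess

HOST = "game.example.com"   # placeholder hostname, not DB's real endpoint
LAST_HOPS = 3               # how many trailing output lines to show

if platform.system() == "Windows":
    cmd = ["tracert", "-d", HOST]      # -d: skip reverse DNS lookups
else:
    cmd = ["traceroute", "-n", HOST]   # -n: numeric output only

result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
lines = [line for line in result.stdout.splitlines() if line.strip()]

print(f"Last {LAST_HOPS} lines of the trace towards {HOST}:")
for line in lines[-LAST_HOPS:]:
    print(line)
```

If a Rogers customer and a Virgin customer both saw failures and both shared the same final hop or two, while a working Bell connection took a different final hop, that would point at a problem in that shared stretch of the route rather than at the game servers themselves.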
Thanks Shan for investigating the case. I believe DB has its servers hosted with a big cloud computing company in the US, so the question is for DB: don't you think the hosting provider should have redundant connections with every major internet company?
I will bring your concerns to the team, as I do for other issues.
We have not made any recent changes involving IPv4/IPv6.
As for coverage on the weekends, this is done mainly via our Support Team responding to tickets.
Forum coverage is mainly on weekdays, unless we are alerted to a liveops issue on our end.
I have been locked out for over a week with an active daily dilithium reward purchased. All I have gotten from support is "reset your password", which I told them in the first support ticket I had already done, with no change. They've gone dark on responding to me since last week. I am one of the actual paying customers, not that it should matter, and I haven't had **tsk tsk** for help. WTF!!!
These symptoms look similar to what I've seen in the past when a BGP setting was misconfigured or a main routing trunk was down. Surprisingly, when you're talking about the internet at that level, there is sometimes only one route between two geographical locations.
In the immortal words of Spock: "Live long and prosper"
Ok, so now we know: for a similar issue on a weekend, EVERYONE sends tickets, and we just decide here what the common subject title will be.
Thank you.
If something similar should happen again, the low number of unique players reporting it here and on our social spaces can be a helpful indication that the issue is not on our end.
I have been locked out for over a week with an active daily dilithium reward purchased. All I have gotten from support is "reset your password", which I told them in the first support ticket I had already done, with no change. They've gone dark on responding to me since last week. I am one of the actual paying customers, not that it should matter, and I haven't had **tsk tsk** for help. WTF!!!
This is not the same issue.
I am sorry that it is taking a long time to find a solution for your specific case.
You will be compensated for the days you were unable to claim your daily Dilithium.
It may well have been an internet peering/routing issue caused by the storm, but in my opinion that still makes it DB’s responsibility.
Unlike a home internet connection, you don’t just take a server or bank of servers, connect them up to a network socket and hope for the best.
As a business providing a service, you have to ensure there are multiple solid routes in and out of the location for redundancy and load balancing. Potentially you would have a server in a completely different location to help handle this sort of issue. If a portion of your customers, however small, has no access at all, that's a situation you want to avoid. You either invest in suitable redundancy yourself or you turn to a supplier or data centre to manage this on your behalf.
Even in the case of the latter, you should be asking the supplier what backup routes/peers they are using, how many there are, and what went wrong on this occasion. If they tell you nothing is wrong, then they are either lying or unwilling to investigate, and it's time to find a different supplier.
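To illustrate the kind of redundancy being described, here is a minimal sketch of a client falling back to an alternate endpoint when the primary one is unreachable. The endpoint names are invented for this example and say nothing about how DB's client or hosting is actually set up.

```python
# Illustration only: a client falling back to an alternate endpoint when the
# primary is unreachable. The endpoint names are invented for this example
# and do not describe DB's actual infrastructure.
import socket

ENDPOINTS = [
    ("us-east.game.example.com", 443),   # hypothetical primary region
    ("eu-west.game.example.com", 443),   # hypothetical backup region
]

def first_reachable(endpoints, timeout=5):
    """Return the first (host, port) that accepts a TCP connection, else None."""
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue
    return None

target = first_reachable(ENDPOINTS)
if target:
    print(f"Would connect via {target[0]}:{target[1]}")
else:
    print("No endpoint reachable from this network")
```

The same idea applies on the hosting side: with more than one route or region available, a regional routing failure becomes a fallback rather than a full lockout for the affected players.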