Some people need to understand how data centers and backbone ISPs actually work, so let's use an analogy.
Let's take your place of employment. Let's pretend there is one major interstate (or arterial road or highway) that leads to your place of employment. Let's also pretend that one major way of getting to work gets shut down because of an accident/construction.
It is not the fault of your employer that you either a) cannot get to work or b) are delayed getting to work. Your place of employment is still open. All roads in/out of your place of employment are still up and running. Your employer is not responsible for any of the highways or government-sponsored roads.
So let's bring it back to DB.
Disruptor Beam rents services from Amazon Web Services (AWS). AWS is the biggest cloud provider out there, ahead of even Microsoft's Azure. AWS has a robust back-end system. DB chooses how much to pay for redundancy, but AWS's data centers are rock solid. AWS has SLAs with its major backbone ISPs to maintain uptime. However, you are still at the mercy of other backbone providers to actually be able to make it to AWS's data centers.
The entire internet is built like a set of interconnecting highways. If some major ISP or router somewhere in the chain goes down, it's not DB's fault, unless AWS itself goes down or one of their servers crashes.
DB could pay more $$$ to replicate their servers to additional AWS data centers to offset potential issues such as major backbone provider outages, but costs would go through the roof and the game would no longer be sustainable.
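To put rough numbers on why every hop in that chain matters, here's a quick sketch. The per-hop availability figures are made-up illustrations, not real stats for any provider:

```python
# Rough sketch: end-to-end availability across chained network hops.
# These per-hop figures are illustrative guesses, not real provider stats.
hops = {
    "home ISP": 0.995,
    "backbone transit": 0.999,
    "AWS region": 0.9999,
}

# Every hop must be up for you to reach the game server,
# so the end-to-end availability is the product of all hops.
end_to_end = 1.0
for availability in hops.values():
    end_to_end *= availability

print(f"End-to-end availability: {end_to_end:.4%}")
```

Even with AWS at four nines, the weakest link (the home ISP here) caps the whole chain, which is exactly the "at the mercy of other backbone providers" point.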
So what? You miss out on a legendary? Big deal. It's not the end of the world. And as they said, if you were impacted, submit a ticket (and, implied, quit whining about it).
No... I wasn't negatively affected. I realise that downtime for a lot of players affected myself and others positively, though... which simply means it's not a fair competition.
As an internet connection is required to play, it’s a critical part of the whole Timelines game. As such, they absolutely should be paying for the best connectivity options from AWS and redundancy options like mirrored servers in other locations.
If the game didn’t depend on this active connection between server and client, I would agree that being able to talk to DB’s server would just be a ‘nice to have’, that can stand to be put on a basic connectivity service.
Even after considering the above, it's not fair to continue to start an in-game event when it's clear there is a major issue going on. Put it on hold/postpone/cancel until the issues are sorted. To continue in spite of these infrastructure issues suggests that DB took a business decision based on persuading players it's all totally out of their control ("teh internets") and that most won't understand what they are entitled to. Very shoddy customer service.
No. You have gotten your analogy backwards.
Your employer pays you to get to work. It does not care how you get there or what your excuses for not turning up are. You don't turn up, you don't get paid.
We pay DB to use their service. We do not care for the nitty gritty of how they provide that service or what their excuses for not providing it are. They don't provide it, they don't get paid. Except - they've already taken our money and are refusing to compensate us for being unable to access the service.
Since we are using the highway example: if you hire a cab driver to pick you up at your house at a given hour, and he can't get there on time because the road he was going to use is unexpectedly closed, should he be held responsible (to you, who hired him)?
If it turns out that cab company had only paid for access to just 1 of the roads into your town, when 3 other roads are available, then yes, it’s still the cab firm at fault.
Yes, he should have planned a different route...
The fact that you "don't care for the nitty gritty" is on you, and is nothing other than ignorance.
Again, DB is NOT responsible if your ISP's backbone services are degraded, or if some other border router is down. All that matters is that their AWS services are up and running and AWS's ISPs are operating at pre-agreed SLA levels. If there's an outage somewhere down the chain, that is not DB's fault. Disruptor Beam is not responsible for keeping the entire world's internet backbone running at full efficiency.
If one of AWS's backbone transit ISPs went down, yes, AWS would be on the hook to reimburse/credit Disruptor Beam, and they would, in turn, have to credit the player base in some form or fashion. But we don't know whether the outage was one of AWS's ISPs, or just some peer down the road, which AWS is not responsible for.
From looking at DB's DNS records, their servers are likely housed in the AWS us-west-2 (Oregon) data center. If some backbone ISP in Ohio dies, it's not Amazon's fault.
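For anyone curious how a region gets guessed from DNS: EC2 public hostnames embed the region name. A minimal sketch, with a made-up hostname standing in for whatever DB's records actually return:

```python
import re

# Hypothetical CNAME of the kind a lookup on a game server might return;
# this hostname is made up for illustration, not DB's actual record.
cname = "ec2-54-123-45-67.us-west-2.compute.amazonaws.com"

# EC2 public DNS names embed the region between the instance part and
# "compute.amazonaws.com" (us-east-1 is the odd one out, using "compute-1").
match = re.search(r"\.([a-z]{2}-[a-z]+-\d)\.compute\.amazonaws\.com$", cname)
region = match.group(1) if match else "unknown"
print(region)  # us-west-2
```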
I am now caught up on this thread, and I feel a sense of loss here, like this was a lost opportunity.
I think most of us realize that this was in no way the fault of DB (I understand there is not consensus on this, but to paraphrase Meat Loaf, "14 out of 15 ain't bad!") and that they're under no obligation to do anything for anybody. But man, I'm not gonna lie, this feels like the perfect opportunity to mend a few fences and win back some goodwill, and now it seems that this opportunity has been wasted.
Even if it was just something like, "Hey Captains, while this was not our fault, we value you and to try and pick up your spirits, here is a 4* citation. We are truly sorry that this happened and we are working to identify the issue to take every step to assure it does not happen again."
Nobody is going to stick their noses up at a free 4* citation (well again, to paraphrase my earlier paraphrase, I'm absolutely certain 1 out of 15 will!) and it would have IMO gone a great deal toward establishing a greater sense of community and goodwill.
I'm still trying to wrap my head around the fact that the Provider who does this game also does EMERGENCY CALL CENTERS and apparently did not have redundancies in place.......
Considering the issue is not at DB, but a CenturyLink outage that has closed 911 centers in multiple states, I am not sure there should be any compensation. DB has no control over this issue.
I disagree. DB have full control over what provider they use for external connectivity. They also have full control over what backups are in place for when something fails.
They have control over who they use. But who does your provider use to reach their provider? I have CenturyLink for my landline only. Everything else uses either Verizon or a local cable provider. I never had connectivity issues, but I was woken up in the middle of the night by my cell phone as the alerts went out about no 911 service in the state.
Any internet-based business which doesn't have a solid SLA with their provider will not be in business very long (as long as there is competition in the market).
I seem to have a better SLA with my ISP and I am a home user!
I'm still trying to wrap my head around the fact that the Provider who does this game also does EMERGENCY CALL CENTERS and apparently did not have redundancies in place.......
They do not. The outage is at CenturyLink.
Was going by what someone else had posted. So, people didn't lose 911? That is good.
Why was Vicki not expelled from Greendale after she literally stabbed Pierce in the face with a pencil?!?!?
Second paragraph is a good summary of my point, if anyone had trouble following me. (I occasionally wander off-point.) Excellent opportunity to foster and revitalize goodwill.
"This was totally beyond our control, but we know it impacted a lot of players during a Faction Event. Here is something for everyone's inconvenience."
Ones and zeros are free. Happy, satisfied customers are priceless.
The question is: if whoever failed to provide the service compensates their customers, and those compensate their customers as well, and so on, would that chain of compensation ever reach the affected players, or would it stop at DB first?
I appreciate the angle you’re coming from, and no disrespect intended, but this issue is compounded by a general lack of understanding of internet based services, connectivity and reasonable levels of access.
Many people are blissfully unaware of how IT and telecoms work, but there is actually a huge industry powering these types of services. I have spent several years in that career; knowledge of disaster planning, backup connections, redundancy, SLAs, and contingency plans are essential skills. Planning for failures, even multiple failures happening at the same time (despite odds worse than obtaining the gauntlet gold crew!), is a given.
A company that offers a web-based service requiring an always-on internet connection from its customers, and that challenges those customers to take part in a time-sensitive event it refuses to cancel or postpone, should only ever do so on the basis that it either a) has a rock-solid backup plan or b) is happy to go above and beyond to put mistakes right when they happen.
I’m reading comments here from people that seem to understand either technology or business, or a little of both, but it’s still painfully obvious that DB are able to palm this off as a “global internet issue”, which is simply not true.
For those that understand the relevant IT, take a step back to consider:
- DB chose the hosts (AWS?). Could have been hosted elsewhere.
- Could be hosted with mirrored servers in other countries.
- DB chose the hosting package. Which may include connectivity, uptime commitments, SLAs, etc.
- DB chose the level of (or lack of) redundancy. E.g. mirrored servers, backup internet connections via differing technologies (not reliant on the same backbones).
Sure, this all comes at increasing cost, but that's got to be factored in to DB's business plan. It is certainly not the customer's problem.
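For what the redundancy bullet above would look like in practice, here's a minimal client-side failover sketch. The mirror hostnames and health check are hypothetical stand-ins, not anything DB actually runs:

```python
# Sketch of client-side failover across mirrored endpoints.
# The endpoint names and the probe function are hypothetical stand-ins.
MIRRORS = [
    "game.primary.example.com",    # primary region
    "game.backup-eu.example.com",  # mirror in another country
    "game.backup-ap.example.com",
]

def probe(host: str) -> bool:
    """Stand-in health check; a real client would attempt a TCP/TLS connect."""
    return host == "game.backup-eu.example.com"  # pretend only this mirror is up

def pick_endpoint(mirrors=MIRRORS, probe=probe):
    """Return the first reachable mirror, or None if everything is down."""
    for host in mirrors:
        if probe(host):
            return host
    return None

print(pick_endpoint())  # game.backup-eu.example.com
```

The point of the sketch: with mirrors on independent infrastructure, one backbone outage only knocks out one entry in the list, and the client falls through to the next.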
Please can I request we also stop bringing 911 call centres into this. I don’t understand the circumstances behind those issues, but it’s completely feasible that the issues were just as avoidable as what we’re seeing here with STT. It may be poor IT management or it may be underinvestment, who knows.
I seem to have a better SLA with my ISP and I am a home user!
But if you had CenturyLink at home you'd be down too. Also, many providers lease their service lines from someone else, so even if you had redundancy, that may have been down too. It doesn't matter how much redundancy you have in place; at some point you can't switch over when so much is down and so many companies lease from other companies. It's crazy, and this was definitely an extreme case that impacted phone and internet across the country.
Was going by what someone else had posted. So, people didn't lose 911? That is good.
WE DID!
So, two providers are down at the same time? And one supports ECCs? Has anyone made sure it was an accident?!?!?
I think the comment was more that DB does not do call centers, CenturyLink does, and yes, 911 was down in many places.
You're welcome to make comments about us not understanding this particular industry as well as you do, and you're right that I don't know the first thing about this industry, but I'm also not convinced you understand how business works in general.
If DB's expenses go up, you better believe our costs will go up. They chose a price point that allowed them to get the redundancy they could afford, considering how much they charge their customers. I'm sure they could pay for the single best damned redundancy service any company could offer, but I'm also sure that the only way they could afford that is by increasing the prices we pay, because that money needs to come from somewhere. Frankly, I'm not sure how many of us would be interested in paying $500 for a monthly dilithium pass.
The more 9's you want, the exponentially higher your costs go.
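To make that concrete, here's what each extra nine buys you in allowed downtime per year:

```python
# Allowed downtime per year for each additional "nine" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in range(1, 6):
    availability = 1 - 10 ** -nines           # 0.9, 0.99, ..., 0.99999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5f} -> {downtime_min:,.1f} minutes/year down")
```

Going from two nines to five nines cuts allowed downtime from about 5,256 minutes a year to about 5, and each step up costs far more than the last.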
I wanna say upfront, you are just the messenger, and the message they give you to pass on is totally beyond your control.
Not your fault. Wanna be clear not blaming you for the message!!!!!
However, that is a VERY unsatisfactory answer, from a customer confidence standpoint.
NOT YOUR FAULT, SHAN!!!!! NOT BLAMING YOU FOR THE MESSAGE THEY ARE GIVING YOU TO PASS ON!!!!!
The fact that you "don't care for the nitty gritty" is on you, and is nothing other than ignorance.
Again, DB is NOT responsible if your ISP's backbone services are degraded, or if some other border router is down. All that matters is that their AWS services are up and running and AWS's ISPs are operating at pre-agreed SLA levels. If there's an outage somewhere down the chain, that is not DB's fault. Disruptor Beam is not responsible for keeping the entire world's internet backbone running at full efficiency.
Agreed!
We did in MA as well.
Or raise the Slot Cap by ten, also giving each player ten Slots. Cost? Zero. Goodwill? Priceless.