- Priority - Critical
- Affecting System - Voice System
Some voice services are offline. Engineers are investigating the issue.
Reported At: 5 Nov 2018 09:30 NZDT
16:28 Toll-free is now working again and CVS is stable.
15:12 We are currently investigating inbound calls to toll-free numbers (0800 and 0508) not working on CVS.
14:45 Both inbound and outbound calls are now proceeding and are being closely monitored.
13:55 CVS registrations have come back online and are stable. Both inbound and outbound calls are still failing.
13:16 Engineers advise that they are now starting to restore voice services.
12:56 Engineers are implementing a resolution now, and it is being closely monitored.
12:38 Engineers are still working with the vendor to resolve the issue.
12:18 Engineers are still working with the vendor to resolve the issue.
11:54 Working on a resolution with the vendor. Registrations and calling are still down.
11:38 Engineers believe they have found the cause of the issue and are working on a resolution.
11:10 Registrations are down; engineers are monitoring the issue.
10:39 Registration is coming back online but all calling is still down.
10:17 Systems are coming back online and are being closely monitored.
9:58 Engineers are still investigating the issue.
- Date - 05/11/2018 09:30 - 05/11/2018 17:00
- Last Updated - 07/04/2019 18:19
- Priority - Critical
- Affecting System - Voice Services
Update - 12/06/2018
13:21 - All outbound and inbound calls are stable now. We are continuing to monitor.
12:49 - All outbound and inbound calls are currently working fine. We are continuing to monitor and investigate.
11:54 - All outbound and inbound calls are back and working now. We are continuing to investigate.
11:47 - All inbound and outbound calls are failing again. Engineers are continuing to investigate.
11:01 - Outbound calls continue to work fine. Some inbound calls are still failing; we are continuing to increase capacity to mitigate inbound call failures.
10:24 - Outbound calls are currently working. We are continuing to actively manage Inbound call stability.
10:00 - We are continuing to investigate the underlying cause of the issues, which appear to be load related. We have not seen any evidence of a denial-of-service attack at this stage. One possible contributor is the use of "callflow to callflow" recursive loops: when no devices are registered, inbound calls are passed between two looping callflows without a device to break the loop. Where these have been identified, we have broken the loop and informed those wholesalers. We are continuing to work with the vendor to identify additional causes of the massive load, and we are working on increasing resources.
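The recursive-loop condition described above can be pictured as cycle detection over the call-forwarding graph. A minimal sketch, assuming each callflow is represented by an id mapped to the id it forwards to when no device is registered (all names here are illustrative, not the actual platform's data model):

```python
# Hypothetical sketch: detecting "callflow to callflow" recursive loops.
# `forwards` maps a callflow id to the callflow id it hands the call to
# when no device is registered; None means a device or voicemail breaks
# the chain at that point.

def find_forwarding_loops(forwards):
    """Return the set of callflow ids that sit on a forwarding cycle."""
    looping = set()
    for start in forwards:
        seen = []
        node = start
        # Walk the forwarding chain until it ends or revisits a node.
        while node is not None and node not in seen:
            seen.append(node)
            node = forwards.get(node)
        if node is not None:
            # We revisited `node`: everything from there onward is a cycle.
            looping.update(seen[seen.index(node):])
    return looping

# Example: callflows A and B forward to each other with no device in between,
# while C terminates safely at voicemail.
flows = {"A": "B", "B": "A", "C": "voicemail", "voicemail": None}
print(sorted(find_forwarding_loops(flows)))  # ['A', 'B']
```

"Breaking the loop" then amounts to pointing one callflow in each detected cycle at a terminating endpoint (a device, voicemail, or a hang-up action).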
Update - 11/06/2018
16:25 - All inbound and outbound calling has remained stable for the past hour. Engineers are continuing to keep this under close observation.
15:13 - Inbound and outbound calls are currently working, with some intermittent post-dial delays. We are continuing to work with the vendor on this issue.
14:36 - The issue appears to have recurred. Inbound and outbound calls are failing. We are continuing to work with the vendor on this issue.
13:26 - Servers are under heavy load as call volumes and re-registrations are higher than normal. As a result, some calls are getting post-dial delays. We are continuing to work on the issue.
12:58 - All calls are working; engineers are monitoring stability.
12:40 - The issue is still being investigated.
12:00 - There are no new updates yet; this is still being investigated.
CVS outage: no inbound or outbound calls. Voice engineers are investigating.
No Inbound or Outbound Calls
- Date - 11/06/2018 11:20 - 11/06/2018 16:25
- Last Updated - 05/11/2018 09:40
- Priority - Medium
Planned maintenance on the Icecold Shared Hosting Services is due to take place from 6/11/2012 22:00 until 7/11/2012 02:00.
We don't expect the outage to last more than 15 minutes.
This planned maintenance will improve performance and our spam filter system.
We apologise for any inconvenience this may cause.
Icecold Hosting Services Team
- Date - 05/11/2012 22:00 - 08/11/2012 12:40
- Last Updated - 05/11/2012 23:19
- Priority - Critical
- Affecting System - Data Center
Data center incident report:
At approximately 11:15AM on 30/07/2012, our network began experiencing an internal network failure.
A number of subnets on the HD network started experiencing sustained connectivity loss. HD network engineers immediately began investigating to identify the root cause; the issue was resolved at 12:10PM.
The HD external network remained online, and upstream providers were accepting our BGP announcements on all four redundant routing paths.
Upon further investigation, it was determined that the root cause of the issue was a rogue interconnect subnet between our edge routers (PE) and the customer-facing Juniper Switch Stack (CE).
Engineers had to work through the CE (Juniper Switch Stack) datacentre-facing switches to identify an undocumented rogue route to an IP address that FG1 failed to announce to the CE correctly at 11:15AM. This was triggered by a routine change on the Fortigate in question, which caused the CE to static-route to an IP that should no longer have been active within our network. We accept this was an oversight on our part.
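The failure mode above, a static route left pointing at an IP that nothing announces any more, lends itself to a simple automated cross-check. A minimal sketch, assuming route and announcement data have been exported into plain sets of IP strings (the variable names and sample addresses below are illustrative, not taken from the actual network):

```python
# Hypothetical audit sketch: flag static-route targets on the CE switch
# stack that are no longer announced by any upstream device (e.g. FG1).

def stale_static_routes(static_route_targets, announced_ips):
    """Return static-route targets that nothing currently announces."""
    return sorted(ip for ip in static_route_targets if ip not in announced_ips)

# Illustrative data: one route still matches a live announcement,
# the other points at an IP no device announces any more.
ce_static_routes = {"10.0.5.1", "10.0.9.7"}
fg1_announcements = {"10.0.5.1"}
print(stale_static_routes(ce_static_routes, fg1_announcements))  # ['10.0.9.7']
```

Run periodically against exported configs, a check like this would surface an undocumented rogue route before a routine change turns it into an outage.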
HD takes this event very seriously and is taking proactive steps to eliminate any future possibility of service disruption.
Network engineers have begun performing a thorough audit of all interconnect HA fail-over IPs and customer-facing VLAN networks to ensure that this type of oversight cannot happen again.
We understand that your business may have been disrupted and we sincerely apologise for any inconvenience that this may have caused you.
We expect the audit to be completed within 72 hours and in the meantime we are actively monitoring our network for any more potential issues.
- Date - 30/07/2012 19:22
- Last Updated - 18/08/2012 17:15