I really appreciate you taking the time to write this, truly. It's obvious you care about Radio Paradise, and we don't take that lightly.
First, I want to be clear: there was no plan to "go offline." This wasn't a scheduled maintenance window where we knowingly shut down the stream. It was backend work intended to strengthen the infrastructure, and unfortunately it triggered unexpected failures. That's on us, but it wasn't a deliberate decision to allow dead air, and there was no way to plan for something we had no idea would trigger such issues.
You're absolutely right that dead air is painful. William knows that better than anyone. We all do. The stream itself is sacred to us. The moment something goes down, it becomes all hands on deck until it's restored.
One thing that might not be visible from the outside is how lean we actually are. While there are people who contribute in various roles, our core technical infrastructure is maintained by essentially two engineers, and one of those engineers is also the boss and main DJ. They are supporting a global, 24/7 streaming service that operates across web, mobile apps, CarPlay, Android Auto, smart speakers, and multiple audio formats, all without corporate backing, venture capital, or a broadcast network behind us.
We really are a mom-and-pop operation serving a massive worldwide audience. That's part of the beauty, and part of the fragility.
The house analogy wasn't meant to imply neglect or known instability. It was meant to acknowledge that after 26 years of continuous evolution, systems accumulate complexity. Modernizing that complexity is necessary to keep RP viable long-term. Sometimes that means carefully touching foundational pieces. And sometimes, despite testing, something behaves differently in the wild than it does in staging.
Could we build more redundancy? Of course. We are actively working toward greater resilience. But that takes time and resources, and we grow those carefully and deliberately.
There were no ignored red flags. There was no cavalier decision-making. There were long hours of careful work that had unintended consequences, and a team that responded as quickly as possible.
As for revenue impact: yes, outages matter. We're fully aware of that. No one here treats this casually. This station is our livelihood and our life's work.
We're constantly balancing:
• Stability
• Innovation
• Limited staffing
• Financial sustainability
• And a global audience that expects perfection
It's not corporate radio. It's not iHeart. It's not Spotify. It's a small, fiercely committed team trying to keep human-curated radio alive in a very automated world.
Your feedback is heard. And your expectations come from a place of wanting RP to thrive, which we share.
We're not perfect. But we are deeply committed.
And we're still here.
Knock it off Randy. Alanna very thoughtfully responded to his thoughts and they had a wonderful exchange of information. Then you inject yourself where you are not needed. You are the dead horse. Just go away.
Anal, much? Be happy the music is back and just let it go. No need to beat a dead horse.
Thank you for understanding the spirit in which I wrote my comments, and for responding to the concerns about the outage. I also wish to express regret for my initial characterization of the maintenance as cavalier or careless, and I owe you and William an apology.
It sounds like RP didn't really expect things to go sideways. Shit happens. We all get it. Especially those of us who have full time careers in IT, operations, software development, etc.
In that context, I think it would be terrific and truly enlightening if William and Jarred compiled an "incident post-mortem" report.
There are many examples of post-mortems published by larger institutions like Reddit, Facebook, Microsoft, or Cloudflare.
In brief, they are chronological narratives that begin with a summary of the planned changes and go on to include: highlights of work-in-progress and completed tasks; the moment when the system broke down in an unplanned or unexpected way; what work was done to restore service; how long it took to restore service; and performance and reliability data as the system came back online. Incident post-mortems often conclude with a "root cause analysis" that narrows down and explains the exact reason for the unexpected outage.
As always, "Thanks for Listening."
Here's the thing though: William is a former radio DJ. He knows exactly how bad "dead air" is. You get a massive drop off in listeners, and it takes a while to recover. That translates into lost revenue.
When commercial radio stations had to do planned maintenance on backend infrastructure, such as repairing or replacing a rack of audio equipment, they planned ahead with an alternate broadcast to avoid complete radio silence during the actual maintenance window going on behind the scenes. Typically this would entail playing a long tape of pre-recorded music.
Alanna is framing this outage as cleaning up a house that's been lived in for 26 years. That definitely implies the problems were known, and that some kind of maintenance window could've been created. Jarred could have altered the API to play the same music on a loop for 24 hours, along with a recurring announcement recorded by William or Alanna or Josh that certain functions would be unavailable (downloads, comments, voting, etc.).
That none of these contingencies were planned or prepared, and that the maintenance went ahead means that there is a critical lack of planning and redundancy happening at Radio Paradise.
I'm a career IT professional; if I made a change like this without announcing it, I'd be seriously reprimanded. Possibly fired.
William is playing a bumper on the main mix touting that donations from loyal listeners have funded employment for 12 staff members who never have to look for another "real job," because working at Radio Paradise is amazing.
I know it's easy for me to critique the station from here... but I've actually been to the station at Eureka. On the surface, it looks to be a very well-oiled machine and a squeaky-clean operation. And it is abundantly clear that all the staff love the station and want it to keep going forever. Yet... out of those 12 people, did anyone raise ANY alarms or red flags?
This organization is too mature to be making "aw shucks" mistakes. The day-long dead air could translate into lost donation revenue.
Weirdness!
The Beyond 128k AAC stream played 3 of Josh's channel jingles in a row, and after that, it was completely out of sync with the What's Playing list and the web player.
This went on for about an hour, until I started writing this message to report it, and guess what… right before I was about to hit the Submit button, 3 jingles were played again and suddenly the stream was back in sync with the website!
RP is not a non profit organization. It is a listener supported for profit operation funded by voluntary contributions. People seem to have taken RP for granted based upon many of the comments I have been reading over the past weeks. That it has survived for as long as it has is a tribute to William's vision, skills and dedication to undertake and succeed in a way never before accomplished in "radio" history.
Be grateful, patient and understanding when dealing with your personal inconveniences. These disruptions are in no way planned or desired by anyone that I know of. There is no model for this operation. It is the prototype, and hopefully always will be. Having been here all this time, I could never have imagined, when I first tuned in 25 years ago on my Windows Me computer listening on winmx, where we would be today. Back then I just wondered how long it would last. Fortunately Alanna has risen to the occasion to carry the torch forward.
Pioneers risk everything to undertake their journey. Just think where internet radio would be without B & R's vision, dedication and perseverance. These are growing pains, so be grateful that RP is still growing and not rotting on the vine resting on its laurels. I am.
My thoughts exactly. As a user and code contributor to the Lyrion Music Server open source project, I am very disappointed in the seemingly haphazard way that the RP API has repeatedly been broken, fixed, broken again, changed, and otherwise bounced around for the past month or so. This is not housecleaning. It is uncontrolled chaos. Many Lyrion users, including myself, are becoming very frustrated and I have no doubt that many are just moving on to other more stable, if less excellently curated, streaming sources. Please try to do better. We are all rooting for you.
I think that an in-depth "post-mortem" report would go a long way to helping the community understand what caused the outage, and gain a deeper appreciation of the complexity of RP, and whatever was being cleaned up under the hood.
In case anyone's resting on their laurels thinking it's all good now, I'm still getting disruptions. It happened on both the iOS app and Lyrion Music Server. Here's the error that the RP plugin for LMS just threw:
Plugins::RadioParadise::ProtocolHandler::new (29) We seem to be in a redirection loop for url: https://audio-geo.radioparadise.com/dj/4/10014873.flac
Edit: I will say that things are slooowly getting better, so it's at least going in the right direction. Trying not to sound like a whiny little bitch.
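For the curious, the error LMS logged above is a standard client-side safety net: an HTTP client follows Location headers from one URL to the next, and it bails out when it sees the same URL twice (or exceeds a hop limit) rather than bouncing around forever. A minimal sketch in Python; the `redirect_map` here is a made-up stand-in for real HTTP responses, purely for illustration:

```python
def follow_redirects(url, redirect_map, max_hops=10):
    """Follow a chain of redirects; raise if we loop or exceed max_hops.

    redirect_map maps a URL to the URL its server redirects to
    (i.e. the Location header). A URL absent from the map is final.
    """
    seen = set()
    while url in redirect_map:
        if url in seen:
            raise RuntimeError(f"We seem to be in a redirection loop for url: {url}")
        if len(seen) >= max_hops:
            raise RuntimeError(f"Too many redirects for url: {url}")
        seen.add(url)
        url = redirect_map[url]
    return url

# A well-behaved chain resolves to a final URL:
print(follow_redirects("a", {"a": "b", "b": "c"}))  # -> c

# A loop (a -> b -> a) is detected instead of spinning forever:
try:
    follow_redirects("a", {"a": "b", "b": "a"})
except RuntimeError as e:
    print(e)
```

Presumably the RP plugin hit the same condition for that .flac URL: somewhere in the chain, a server redirected back to a URL it had already visited.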
I'm glad to hear that this was a planned outage. I suffered a bit for a day but almost everything seems to be back to normal today (website, Mac OS App, and IOS App) with one major exception. On the IOS App, the download still produced the following error when attempting to play the downloaded file - "Failed to start local playback. Please try again." Multiple retries does not help unfortunately. Any ideas why this is happening or when it will be fixed?
@dryan67, please take care not to confuse planned maintenance with a planned outage.
When properly documented and rehearsed, planned maintenance can be done on live systems without causing a noticeable or extended disruption.
A planned outage, on the other hand, means: "we have run all reasonable scenarios for this work, and there's no way we can do it without disrupting the live service for more than a few seconds, therefore we are planning to take it completely offline as part of the work."
This was planned maintenance with an unplanned outage.
Despite what @lovehorn wrote (and later deleted), I am not "emotion-express on steroids" and I don't think my comments can be construed as "swearing." These are the frustrations of a 20-year career IT systems administrator who has done his fair share of planned and unplanned maintenance, as well as surviving the stress of both planned and unplanned outages.
In my line of work, I am always expected to have a "rollback plan," unless the work is a "one way" fix that can't be rolled back. This means, if an outage occurs, there needs to be a way to revert to the previous settings to restore service as quickly as possible. Computer code doesn't burn up or vaporize like real hardware. That's what backups are for.
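To make the "rollback plan" idea concrete for non-sysadmins: before a risky change you snapshot the current state, and if the change blows up you put the snapshot back. A toy sketch in Python; the config file name and its contents are hypothetical, just to show the shape of it:

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def rollback_on_failure(path):
    """Back up `path` before risky changes; restore it if an exception escapes."""
    backup = path + ".bak"
    shutil.copy2(path, backup)  # snapshot the current state
    try:
        yield
    except Exception:
        shutil.copy2(backup, path)  # roll back to the snapshot
        raise
    finally:
        os.remove(backup)  # clean up the snapshot either way

# Usage: the "maintenance" below deliberately fails, and the file survives intact.
cfg = os.path.join(tempfile.mkdtemp(), "stream.conf")
with open(cfg, "w") as f:
    f.write("bitrate=320\n")

try:
    with rollback_on_failure(cfg):
        with open(cfg, "w") as f:
            f.write("bitrate=banana\n")  # the botched change
        raise RuntimeError("smoke test failed")  # simulated outage
except RuntimeError:
    pass

with open(cfg) as f:
    print(f.read())  # -> bitrate=320
```

Real rollbacks are rarely this tidy (databases, caches, and in-flight traffic complicate things), but the principle is the same: never make a change you can't undo unless you've explicitly decided it's a one-way fix.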
I have also been listening long enough to see RP and its sister station, SomaFM, go through some outages before. Rusty, for his part, seems to do a pretty good job of learning lessons from unplanned outages.
When RP posted an IT job 2-3 years ago, I really wish I could have just up and left Los Angeles and come to work for them instead. But I wasn't in a position to just leave the area and move my entire life up north.
So instead, I am simply pleading for more careful planning in the future so that everyone can enjoy the service for years to come, long after William has spun his last record.
What you're missing:
This is a software system set up by a DJ self-educated in IT,
without any of the industry standards you mention having been implemented.
Plus unexpected growth.
Plus ...
Be nice!
Be thankful!
I am a recurring donor for several years now. That is how I express my thanks.
I am also allowed to express my frustration. William ostensibly hired all these people to help out because the system has reached some growth limit.
People who donate to nonprofit organizations expect them to be good stewards with their funds and do everything they can to keep the organization healthy and strong.
That includes not jeopardizing future revenue streams.