4 Software Engineers Share the Biggest Technical Challenges They’ve Faced



In recent interviews with four local engineers, Built In Austin asked each about the biggest technical challenge they have faced.

Their responses? Each found a path, in one way or several, to solve it.

And while that might seem obvious, all four also reflected on what they learned from their challenges, the real highlight of the problem-solving process.

“While we, as a team, certainly believe in the sentiment that you shouldn’t reinvent the wheel, there are certainly times when it makes sense,” Loomly’s Engineering Lead Ari Summer said. 

“Although I was aware of these concepts before working on this feature, this opportunity allowed me to fully understand the implementation of these concepts in a real-world scenario,” AdAction’s Senior Full Stack Developer Dusty Christy said. 

“This challenge gave me a chance to grow my skillset rapidly in a way that I wouldn’t have been able to at a larger company like Facebook or Google,” REX’s Software Engineer Colt McNealy said. 

“The project itself helped my team and me gain more practical experience in coordinating cross-team efforts and an appreciation for the challenges they create,” ThousandEyes Engineering Leader John Shields said. 

Below, the four local engineering leaders go into more detail about the problems at hand, the solutions they uncovered and the lessons they learned. 

 

Ari Summer

Engineering Lead

Summer, an engineering lead at the brand success platform Loomly, led us through how his team solved the problem of reliably serving static assets during a rolling deploy, a process Summer said can be difficult to navigate. 

 

What’s the biggest challenge you’ve faced recently in your work? 

One tricky challenge we faced was related to reliably serving our static assets (JavaScript, CSS, images, etc.) for our Ruby on Rails app during a rolling deploy process. Rolling deploys are tricky because, during the deploy process, you are temporarily and simultaneously serving traffic from both old and new versions of your app as machines are gradually updated with the new version. If you’re not careful, it can lead to unexpected consequences when deploying updates.

When it comes to serving static assets during a rolling deploy, you need to make sure to serve both the old and new assets during the deploy process since a client could be requesting either during the deploy. 
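
To make that concrete, here is a minimal Ruby sketch of what can go wrong mid-deploy. The fingerprinted file names are hypothetical, not anything from Loomly's actual app:

    # Machines on the old release only have old fingerprinted assets on disk,
    # while HTML served by new machines references the new fingerprints.
    old_machine_assets = ["application-a1b2c3.css"]  # hypothetical old digest
    new_machine_assets = ["application-d4e5f6.css"]  # hypothetical new digest

    # A client that got its HTML from a new machine requests the new asset,
    # but the load balancer may route that request to an old machine:
    requested = "application-d4e5f6.css"
    status = old_machine_assets.include?(requested) ? "200 OK" : "404 Not Found"
    puts status  # => 404 Not Found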

 

How did you and your team overcome this challenge in the end? 

One way to solve this problem is to maintain old and new versions of assets on a CDN (content delivery network). We use CloudFront, backed by S3. In addition to removing the burden of maintaining old and new versions of assets on the machines serving app traffic, this reduces the load on those machines and provides edge caching for faster load times for our users.
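
In a Rails app, pointing asset URLs at a CDN is a one-line configuration change. A minimal sketch, with a placeholder CloudFront distribution domain rather than a real one:

    # config/environments/production.rb
    Rails.application.configure do
      # Serve compiled assets from CloudFront instead of the app servers.
      # Rails' asset helpers will then emit URLs with this host automatically.
      config.asset_host = "https://d1234abcd.cloudfront.net"  # placeholder domain
    end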

In order to get our compiled assets onto S3 to be served via CloudFront, we started using asset_sync, a Ruby gem that integrates with Rails to upload your assets to S3. We added this to our CI pipeline to upload assets before we started the rolling deploy process. This worked well for some time, but asset_sync didn’t provide an easy way to customize the configuration for our different environments (development, staging, production), nor a way to delete old, unneeded assets, which let them pile up in our S3 bucket. 
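
For reference, a typical asset_sync initializer looks something like the sketch below. The option names come from the gem's documentation; the values are placeholders, not Loomly's configuration. Keeping existing remote files is what lets both asset versions survive a rolling deploy, and it is also what lets retired assets accumulate:

    # config/initializers/asset_sync.rb -- illustrative only
    AssetSync.configure do |config|
      config.fog_provider = "AWS"
      config.fog_directory = ENV["FOG_DIRECTORY"]  # the S3 bucket name
      config.fog_region = ENV["FOG_REGION"]
      config.aws_access_key_id = ENV["AWS_ACCESS_KEY_ID"]
      config.aws_secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
      # Keep files already on S3 so old assets stay available mid-deploy;
      # the flip side is that unneeded assets pile up in the bucket.
      config.existing_remote_files = "keep"
    end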

We ended up taking inspiration from asset_sync and building our own library for uploading and maintaining our static assets on S3. This has allowed us to easily upload to different buckets for our different environments and to easily retire old assets after a configured amount of time. Thinking this could be useful to others, we have started to extract our work into an open-source gem called S3AssetDeploy. 
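
The core idea of retiring assets after a configured window can be sketched in a few lines of Ruby with the aws-sdk-s3 gem. This is a simplified illustration of the approach, not S3AssetDeploy's actual implementation; the class and parameter names are hypothetical:

    require "aws-sdk-s3"

    # Hypothetical sketch: delete remote assets that are no longer part of
    # the current build AND are older than a configured retention window.
    class AssetRetirer
      def initialize(bucket_name:, retention_seconds: 7 * 24 * 3600)
        @bucket = Aws::S3::Resource.new.bucket(bucket_name)
        @retention = retention_seconds
      end

      # current_keys: the asset paths produced by the latest compile
      def retire_old_assets(current_keys)
        cutoff = Time.now - @retention
        @bucket.objects.each do |object|
          next if current_keys.include?(object.key)  # still in use by the new release
          next if object.last_modified >= cutoff     # too recent; a client may still need it
          object.delete
        end
      end
    end

Deleting only assets that are both absent from the current build and older than the retention window is what keeps cleanup safe during rolling deploys.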
 

There are certainly times when it makes sense to build your own solution for your use case if what’s out there doesn’t quite fit your situation.”

How did this technical challenge help you grow as an engineer or help you strengthen a specific skill? 

While we, as a team, certainly believe in the sentiment that you shouldn’t “reinvent the wheel,” there are certainly times when it makes sense to build your own solution for your use case if what’s out there doesn’t quite fit your situation. The hard part is choosing when this makes sense. Doing so gives you the freedom to gear your solution to your needs, but it comes with the responsibility of maintenance and upkeep. It’s important to have your needs clearly defined before diving into a custom solution, and working with what’s already out there can really help in providing some of that clarity.

 


John Shields

Engineering Leader

Shields, an engineering leader at network intelligence company ThousandEyes, led us through the practical and technical solutions he and his team implemented for a migration that required zero downtime for users. 

 

What’s the biggest technical challenge you’ve faced recently in your work? 

We were recently migrating our primary customer-facing web application and API from an in-house data center to AWS. Since this application is high volume and directly customer-facing, we were required to perform the switch with zero downtime and no lost user sessions. The application is deployed on Kubernetes, so we were able to have the cluster span both data centers, making the same application deployments available in each. The tricky part was performing the network switch and moving from an internal F5 load balancer to an AWS application load balancer. This required our canary mechanism to change as well, and the new mechanism needed to be verified.

The technical challenges were interesting, but the part that made it particularly tricky was the combination of zero downtime and the coordination of multiple teams within the company.

 

How did you and your team overcome this challenge in the end?

In the end, the way we overcame the challenges in this migration was part technical and part practical.

On the technical side, we were able to leverage AWS peering to provide a single Kubernetes cluster across both data centers. This allowed us to use the same deployment for each application, which made it easy to maintain sessions and eased deployment complexity. We also leveraged multiple ingress controllers in the Kubernetes cluster to support different canary mechanisms for the F5 traffic versus the AWS ALB traffic. Lastly, we utilized a temporary DNS configuration to allow us to fully test the new AWS load balancer prior to the switch. All of these (and other) technical approaches allowed us to make these production changes with confidence and with the ability to roll back easily if needed.
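
The ingress piece of a setup like that can be sketched in a few lines of Kubernetes YAML. This is a generic illustration using the community NGINX ingress controller's canary annotations, not ThousandEyes' actual configuration; the controller class, hostname, service name and weight are all placeholders:

    # Illustrative only: a canary Ingress pinned to a dedicated controller
    # class, routing a small share of traffic to the new deployment.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: webapp-canary
      annotations:
        kubernetes.io/ingress.class: "nginx-alb"         # hypothetical second controller
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "10"  # send 10% of traffic here
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: webapp-new
                    port:
                      number: 80

Running a second ingress controller with its own class is what lets each load balancer keep its own canary mechanism during the cutover.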

On the practical side, this effort required much coordination among our SREs, network engineers, application developers and engineering managers. We were able to create helpful project plans and runbooks for performing the migration. We utilized these written documents to align all of these groups and ensure everyone knew their roles and responsibilities.
 

The project itself helped my team and me gain more practical experience in coordinating cross-team efforts and an appreciation for the challenges they create.”

How did this technical challenge help you grow as an engineer or help you strengthen a specific skill?

My team was responsible for the overall coordination along with the technical aspects of the application deployment, new ingress configuration and new canary support. This effort allowed my team and me to gain a deeper knowledge of the networking details of both Kubernetes and AWS. We also went into great detail on various canary mechanisms and learned how to leverage HTTP standards to make them work with different network topologies.

The project itself helped my team and me gain more practical experience in coordinating cross-team efforts and an appreciation for the challenges they create.

 

Colt McNealy

Software Engineer

As a member of the infrastructure/DevOps team at digital platform and real estate brokerage REX, McNealy and his teammates work to solve macro problems that will affect other engineering teams in the future. His most recent challenge? Ensuring the interactions between all of their microservices run smoothly. 

 

What’s the biggest technical challenge you’ve faced recently in your…
