The post We have become a Meta Certified company! appeared first on SeekandHit.
The Meta Certificate is a global endorsement granted by Meta to those who prove proficiency and professionalism in executing online campaigns on Meta platforms. It comes in two forms: individual certifications for roles such as account managers, and company-level certifications earned through the collective effort of employees.
Company-level certification, like the one we’ve achieved at SeekandHit, is intended for organizations utilizing Meta’s platforms and strategies in their digital marketing endeavours, as we do in performance marketing. Certificates can focus on various areas such as media (planning and buying), analytics, creativity, community management, and content creation.
To attain company-level recognition, certain criteria must be met. In our case, for the Meta Media Certified Company, we needed to accumulate a minimum of 20 Meta Certified Media Buying and/or Media Planning certificates or ensure that at least 20% of employees held these certifications, depending on the size of the company or department.
The certification journey begins with individual achievements. Each team member must first pass their respective certification exam. Once the team collectively meets the requirements, the company can apply for recognition at the company level.
That is why we take immense pride in the large number of Meta Certified team members and the joint effort that enabled us to secure the Meta Certified Company certificate.
The individual certification process starts with studying Meta’s advertising materials, which typically takes around five hours. Prior hands-on experience with Meta’s platforms comes in handy, as it provides a concrete foundation for understanding the material.
Next, applicants register for the exam, pay the exam fee and prepare the necessary technical requirements. The exam itself lasts 105 minutes and is conducted under the supervision of a proctor. While some questions may appear ambiguous, mastering the study materials generally ensures success. I found the examination process challenging, but in the end it was less stressful than I originally expected.
The certification serves as Meta’s acknowledgement of SeekandHit’s expertise in digital media and commitment to expanding businesses on its platforms. Our marketers are recognized as true experts in leveraging Meta for advertising, resulting in more efficient campaigns.
For our company, it’s a testament to the value we place on employee education and professional development. Currently, we hold more than 100 certificates across Meta, Google, and Microsoft platforms, demonstrating our skills in digital advertising, tracking and analytics.
P.S. A couple of years ago, our colleague Joško wrote about Blueprint certificates. Most of the content is still relevant, so if you’re interested in getting a certification, take a look!
The post How to get started in the world of digital marketing appeared first on SeekandHit.
I’d start by looking into online courses. Platforms like Coursera, Udemy, and HubSpot Academy offer comprehensive digital marketing courses. Look for courses on topics like SEO, social media marketing, email marketing, content marketing, and analytics. You don’t have to dive too deep into these, but gaining an understanding of the world of digital marketing and the various areas you can specialize in can give you a general sense of where you want to take your career.
Another great way to do this is by studying for and earning the following certifications: Google Analytics, Google Ads, and Facebook Blueprint. Not only can these certifications enhance your credibility when applying for a position in digital marketing, but some companies (SeekandHit included) insist you pass them in order to keep a certified professional staff at all times. Once you earn them, you will need to retake the exams on a yearly basis in order to keep your certified badge.
Additionally, explore the world of paid advertising. Specifically, familiarize yourself with platforms like Google Ads and Facebook Ads. Learn about keyword targeting, ad creation, and budget management. If you can, experiment with small campaigns to understand how paid advertising works.
Building a bit more on what I already mentioned, understanding the Basics is crucial at this starting point. So, get familiar with the fundamental concepts of digital marketing, including SEO (Search Engine Optimization), SEM (Search Engine Marketing), social media marketing, content marketing, and email marketing.
If you are interested in the website side of things, try covering basics like website design, user experience (UX), and conversion optimization.
In order to take that extra step, if your own personal circumstances let you, a great way to gain practical experience is through internships. And guess what, we offer those! Get in touch here – https://seekandhit.com/contact/. Or you can try freelancing; real-world projects will give you insights that theoretical learning alone cannot provide. If you go this route, don’t forget to create a portfolio down the line. Document your projects and achievements online; this can be invaluable when applying for jobs or freelance opportunities.
Somewhere down the line, hone your analytical skills by learning data analysis. Familiarize yourself with analytics tools such as Google Analytics. Understanding how to track and analyze website traffic, user behavior, and campaign performance is crucial for data-driven decision making.
Join Online Communities: Participate in digital marketing forums, groups, and communities. Platforms like LinkedIn, Reddit, and Twitter have active communities where professionals share insights and opportunities.
Attend Events: Attend webinars, conferences, and local meetups to connect with industry professionals. Networking can open doors to new opportunities. There is a great community of tech professionals in Split called Split Tech City, be sure to check them out and join their online and offline activities, there’s something for everyone – https://split-techcity.com/.
I can’t stress this enough. One way to stay updated is to follow industry blogs: subscribe to blogs and websites that regularly publish updates on digital marketing trends. This will help you stay current with the latest tools and strategies. Here are a few links to get you started:
https://www.marketingdive.com/
https://www.searchenginejournal.com/category/paid-media/pay-per-click/
https://searchengineland.com/
Another great way I stay up to date is through podcasts and webinars: listen to podcasts and attend webinars hosted by industry experts. This is a convenient way to learn from experienced professionals. My current favorites are The Digital Marketing Podcast by Target Internet and The Marketing Millennials by Daniel Murray.
In the end, remember that hands-on experience is crucial in digital marketing. Educating yourself and learning about digital marketing is a great way to get started in the industry, but if possible, try applying what you learn to real-world projects or create your own projects to build a practical understanding of digital marketing strategies.
Remember that digital marketing is a dynamic field, so ongoing learning and adaptation are essential. Be proactive, test different strategies, and continuously refine your skills based on industry developments.
Stay curious and good luck!
The post How to boost your team’s performance by timely code review appeared first on SeekandHit.
If you have ever worked in a team as a developer, code review should be familiar to you. However, if you work alone or need a refresher, let’s start from the beginning to understand what code review entails.
Code review is a process where peers review and provide comments on the code written by a developer. It serves to improve code quality, readability, and also helps the reviewer gain a better understanding of the code and the associated feature or product. Additionally, code review acts as the first line of defense for quality assurance, as the reviewer can identify inconsistencies or bugs in the code.
Before examining the code, it is a good practice to test the modified code. This can involve clicking around on the frontend or making a simple request on the backend. These initial tests can help guide the reviewer in identifying potential code issues and what to be cautious of.
The next step is to examine the code thoroughly. During the code review, nothing should be overlooked. Sometimes the reviewer’s suggestion may not turn out to be the better solution, but any discussion is welcome: it helps both the reviewer and the author think about the codebase and best practices.
When writing comments, it is important to be clear and concise. Comments that simply state “This is terrible” or “Please change” can confuse the author and lead to unnecessary additional changes. Instead, provide specific feedback or ask clear questions to ensure effective communication.
Additionally, it is important to consider the bigger picture. Keep in mind that you are not only reviewing a new slice of code, but also evaluating how that code fits into the entire project. This means that there may be instances where a line of code is not universally optimal, but it is the best choice for the project as a whole.
If you have been working in an agile team, you are probably familiar with the task lifecycle. Once a task is created, it is added to the Backlog. When the team decides to work on it in a sprint, it is moved to the To Do column. Then, it progresses through the In Progress and Review&QA stages. Finally, when the task is completed, it is moved to the Merged and Deployed stages.
We can group these stages into 3 phases in our development process: pre-development, development, and post-development. Pre-development consists of two parts: Backlog and To Do. It is straightforward – when a task is selected for development, it is added to the To Do column.
The development process involves going back and forth between the “In Progress” and “Review&QA” stages. Once the assignee completes a task, it is sent for review and quality assurance. If any fixes are required, it goes back to the “In Progress” stage and the process continues until it is completed.
After the task has been thoroughly reviewed and confirmed to be free of any bugs, it is then moved to the “Merged” column. In this column, the task awaits its release and is prepared to be moved to the “Deployed” column. This transition marks the final stage of the task’s journey, as it is now ready to be deployed and made available to users.
As developers, we do not always have the choice to create tasks or put them in the “To Do” list. Additionally, even when someone starts working on a task, as a team, we may not have control over how quickly it will be completed. However, after that comes the Review stage, which can be seen as the Death of Productivity.
Sometimes, when you have a lot of work, you may forget to review something if it wasn’t a top priority task. The developer who was assigned to the task might remind you to review it after a few days. If this happens multiple times, the task can remain in the “Review” column for up to 10 workdays, even though it could be reviewed within 2 hours at most.
This leads to high-priority tasks being reviewed promptly, while other tasks are left in review limbo, waiting for someone to review them. This negatively impacts team productivity, as it appears that no releases have been made for a long time. Additionally, it is exhausting for the person assigned to the task, as they have to constantly think about it even after completing it and have to remind colleagues to review it.
You have just completed a significant task and feel extremely satisfied with the work you have done. You eagerly send the merge request for review, expecting approval from your peers. However, as time passes – one hour, two hours, even a whole day – there is no feedback or approval. The initial excitement of completing the task begins to fade, and you move on to other pending tasks.
Just as you immerse yourself in a new task, comments on the merge request suddenly start flooding in. The previous sense of satisfaction now becomes a hindrance, impeding your progress and momentum.
As reviewers, we often overlook the fact that there is a person behind the code who invests time and effort into their work. It is not only important for them to receive timely code reviews, but also to receive constructive and concise feedback.
Here are a few guidelines to follow when adding comments:
As mentioned previously, the most time-consuming part of the task’s lifecycle is the Review & QA stage. This is a process that developers can greatly influence. The key is to ensure timely reviews, so that tasks do not have to wait for several days before being reviewed and potentially requiring additional fixes.
We have decided to set up a daily Slack reminder that prompts us to “Review merge requests in the Review column.” While this reminder alone will not solve everything, as a team, we need to prioritize reviewing tasks right after our daily standup meeting. The purpose of this Slack reminder is to serve as a gentle reminder and encourage us to make reviewing a priority every day.
We examined GitLab metrics and found that, although they may not be entirely accurate (due to some tasks lingering in review for an extended period of time for reasons beyond our control), the average time from the creation of a merge request to its merging is 9 days.
In other words, almost two working weeks pass from the creation of a merge request until it is merged. We had hoped for a faster turnaround, especially for smaller merge requests, which ideally should take less than a day.
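If you want to pull this number for your own project, GitLab exposes the relevant timestamps through its REST API. Here is a rough sketch of how the average could be computed (the instance URL, project ID, token and date below are placeholders, not our actual setup):

```python
from datetime import datetime

import requests

GITLAB_API = "https://gitlab.example.com/api/v4"   # placeholder instance URL
PROJECT_ID = 123                                   # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}


def average_days_to_merge(created_after: str) -> float:
    """Average number of days from MR creation to merge, for MRs created after a date."""
    durations = []
    page = 1
    while True:
        response = requests.get(
            f"{GITLAB_API}/projects/{PROJECT_ID}/merge_requests",
            headers=HEADERS,
            params={"state": "merged", "created_after": created_after,
                    "per_page": 100, "page": page},
            timeout=30,
        )
        response.raise_for_status()
        merge_requests = response.json()
        if not merge_requests:
            break
        for mr in merge_requests:
            if not mr.get("merged_at"):
                continue
            # Timestamps look like "2023-09-05T08:51:46.000Z"; the first 19
            # characters are enough for a day-level calculation.
            created = datetime.fromisoformat(mr["created_at"][:19])
            merged = datetime.fromisoformat(mr["merged_at"][:19])
            durations.append((merged - created).total_seconds() / 86400)
        page += 1
    return sum(durations) / len(durations) if durations else 0.0


print(f"Average days to merge: {average_days_to_merge('2023-09-01T00:00:00Z'):.1f}")
```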
After one month of utilizing daily reminders on Slack and dedicating more time to reviewing merge requests, we have significantly increased our productivity. Now, almost all merge requests are merged within 3 days, ensuring that tasks do not collect dust in the Review column and enabling faster and more frequent releases.
The median value for September and October was similar; however, the average changed significantly, indicating a decrease in the number of extreme values (merge requests that linger in the Review column).
In summary, after only one month we managed to boost our team’s happiness and productivity. Once we recognised the flaw in our process – plenty of tasks waiting for review – our friendly reminders and daily dedicated time for code checkups ensured that, first, we got faster and better feedback; second, we set aside time to be there for our colleagues and improve team dynamics; and third, we no longer had to keep thinking about a simple task for weeks after completing it. Definitely a thing to try if you recognised yourself or your team in here.
If you liked this topic, our docs-as-code blog might interest you!
The post Unveiling the impact of currency value with Google Ads appeared first on SeekandHit.
We conducted this experiment on one of our longest-standing clients’ accounts, so we knew with great precision how their results vary and which metrics, results and bids are usual for their campaigns. In this specific case, we were to run a flight campaign, a type of campaign that usually lasts for a short period of time, in this case a month and a half. What set this campaign apart was the idea of activating identical display and video campaigns on two different accounts with distinct currencies: the euro and the Hungarian forint, a currency that is stable enough yet has a much smaller unit value than the euro. Join us as we dig into the fascinating results of the experiment, revealing the power of currency value in the world of advertising.
The stage was set, and the campaigns were launched simultaneously, running with different currencies.
Right from the beginning, we noticed the difference in the campaigns running in Hungarian forints. On display campaigns we could bid more precisely, and on video campaigns we were able to bid in values smaller than €0.01 cost-per-view (CPV), which wasn’t possible in the euro campaigns due to Google Ads’ technical limitations.
Actually, we had already observed this in the transition from HRK to EUR. While advertising in kunas we could set the CPV to HRK0.01, whereas in euros the smallest amount we can bid is €0.01, which entailed a roughly 7.5x higher bid (and, of course, a corresponding loss).
So now, in the case of the Hungarian forint, we could set the CPV to HUF1, HUF0.5, etc., which is a significantly smaller amount than €0.01.
Just to give you an idea of how these currencies differ…
€0.01 = HRK0.07 = HUF3.88
HUF1 = €0.0026 = HRK0.0195
HRK1 = €0.13 = HUF50.93
And here lies the difference. Frequently, we work with predetermined budgets for flight campaigns, which can result in certain campaigns reaching a “limited by budget” status. This is particularly common with video campaigns, where the smallest possible CPV set in euros (€0.01) is usually too high for the given budgets. Bidding in a currency like the Hungarian forint allows us to find the right balance by identifying optimal (small enough) bids that maximize performance within the given budget constraints.
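To make the arithmetic concrete, here is a back-of-the-envelope illustration using the exchange rate quoted above; the €100 budget is invented purely for the example:

```python
# Bids from the examples above: the €0.01 floor on the euro account
# versus a HUF0.5 bid on the forint account, plus the quoted exchange rate.
EUR_PER_HUF = 0.0026
BUDGET_EUR = 100          # an invented budget, just for illustration

bids_in_eur = {
    "EUR account, CPV €0.01": 0.01,
    "HUF account, CPV HUF0.5": 0.5 * EUR_PER_HUF,
}

for account, cpv_eur in bids_in_eur.items():
    views_covered = BUDGET_EUR / cpv_eur
    print(f"{account}: ≈ €{cpv_eur:.4f} per view -> the budget covers about {views_covered:,.0f} views")
```

Roughly 10,000 views versus about 77,000 views for the same spend, which is exactly why the bid floor matters so much for tight budgets.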
So, if bidding in another currency would avoid a “limited by budget” status and would provide better results and money savings by letting us bid in smaller amounts, this could be a game changer for the whole advertising industry.
We did our best to optimize the campaigns on both accounts, and here are the final results:
| | Original campaigns (EUR) | Test campaigns (HUF) | Difference (%) |
|---|---|---|---|
| Cost | €2,734.31 | HUF1,010,418.42 (€2,691.37) | -1.57% |
| Impressions | 2,725,060 | 4,274,425 | 56.68% |
| Clicks | 19,677 | 29,539 | 50.11% |
| Views | 243,714 | 545,726 | 123.92% |
| Avg. CPC (display campaigns) | €0.09 | HUF23.48 (€0.06) | -33.33% |
| Avg. CPV (video campaigns) | €0.005 | HUF0.64 (€0.0017) | -66% |
The above table clearly demonstrates that, for a quite similar invested amount, the test campaigns yielded nearly double the results of the original campaigns. Notably, the average cost per click (CPC) decreased by 33.33%, while the average cost per view (CPV) decreased by 66%, signifying cost-effectiveness within the Hungarian Forint account.
In practical terms, this means long-term savings.
While the results are undeniably evident, we consider this tactic most useful when looking to get the cheapest possible traffic. In more expensive markets, that is, markets with a higher cost per metric, the difference in savings will probably be smaller.
Notably, after carefully analyzing our campaigns over time, we made an interesting observation: campaigns set in Hungarian forints seem to attract a higher number of invalid clicks compared to campaigns set in Euros. Invalid clicks refer to interactions with ads that do not stem from genuine user interest, encompassing both intentionally unauthorized traffic and accidental or duplicate clicks. While the discrepancy in invalid clicks between these currency settings did not significantly impact our overall performance, it is an observation that is worth attention.
Nevertheless, these findings have given rise to multiple questions.
As of now, we opt to keep these questions open, reserving them for further exploration and similar experiments. The journey of discovery continues.
The post London’s MeasureCamp: A Metrics Wonderland appeared first on SeekandHit.
Since we previously published a blog post about MeasureCamp in Copenhagen, where our dear colleague Nikolina explained it, I won’t delve into what MeasureCamp is. If you’re interested, feel free to check out her blog to find out how it was in Copenhagen.
Also, the cool thing is that you can see MeasureCamp from two different perspectives: Copenhagen from a performance marketing perspective (as Nikolina works in marketing) and London from a tracking perspective.
Yeah yeah, I’m the tracking enthusiast here!
So, London MeasureCamp was incredibly interesting, inspiring, and, most importantly, fun!
There was a lot about GA4, since it is a hot topic in the marketing and data world right now, as well as about BigQuery, Consent Mode, and some motivational sessions.
Let me share with you some of the topics that we found most useful and interesting:
For all the GA4 enthusiasts out there, this session was particularly intriguing because it brought to light the fact that we all share similar challenges. Some of us have warmed up to GA4, while others have reservations, but we all agree that this transition period can be somewhat challenging.
People often compare the features UA had with what GA4 lacks, but it’s essential to remember that GA4 is a different tool, and we will adapt to it. Here are some pros and cons discussed during this session:
We also agreed that this is more difficult for people who worked a lot with Universal Analytics, and that for those of us who are kind of newbies in this world (like me), GA4 is totally fine.
Julius’s YouTube videos are one of our go-to resources when it comes to learning about GA4 and tracking. Check them out -> Analytics Mania
He shared a few valuable tricks for GA4, including information about the common issue of missing the session_start event. Some mentioned that Google is aware of this issue and is working on fixing it.
Also, something very interesting is that GA4 has some weird calculations when it comes to currency if it is not USD.
For instance, if your GA4 property is set up in EUR and you’re sending data in EUR, you might expect Google to handle it correctly. But, surprisingly, GA4 receives the currency in EUR, converts it to USD, and then back to EUR.
And that is why you can see weird numbers sometimes, e.g. if the value is 50 EUR and your property is in EUR, sometimes you will receive it as 49.9. And there is no explanation for it. Something you should be aware of when analyzing the data in GA4.
And of course, sampling. Note that standard reports should be unsampled but in the Explorations you should always check if your report is sampled.
Really useful for those who struggle to explain to clients what Consent Mode is. Also, a funny thing the speaker mentioned is that Google should have named it differently, because this way it often gets mixed up with a Consent Management Platform/pop-up.
Probably all of you have had a similar problem, right?
Something to highlight is that we found out that in GA4 you can change the reporting identity and it applies to historical data as well. Nice!
One more thing to add: be aware that behavioral modelling will not work as expected if you have a lot of data from non-consented users. So more data from non-consented users means a higher chance of mismatch.
If you’re working in tracking, you’ve probably heard of this guy – the one who created the Analytics Debugger extension, which we all find incredibly helpful in our daily work. If you haven’t tried it yet, you definitely should -> Google Analytics Debugger.
He discussed his new feature (that is still partially in progress) that allows anyone to set up a personalized public endpoint capable of receiving Google Analytics 4 Payloads, similar to a Server-side Google Tag Manager (SGTM) endpoint.
The main goal of this tool is to ensure the collection of highly accurate and pristine data for your GA4 implementation. If you’d like to delve deeper, you can check it out here: Analytics Firewall Tool.
Do I need to introduce this guy at all?
Or should I just post this picture and let it be?
During his session, Simo Ahava shared his career journey and introduced us to his Simmer project. He left us feeling motivated to learn more, aspire to achieve greater heights, and openly discuss our challenges.
If you’re not familiar with Simo Ahava, do yourself a favor and make your life in tracking easier by diving into his blogs: Simo Ahava’s Blogs.
SegmentStream is a conversion modeling platform that offers a next-generation alternative to outdated attribution and conversion tracking. In simpler terms, it’s a semi-automated way to perform attribution modeling.
Given the growing emphasis on privacy-first initiatives in today’s tracking landscape and the increasing amount of non-observable data in the future, SegmentStream’s approach is based on modelling conversions using observable data. This provides marketers with valuable insights into channel attribution.
We also explored an intriguing tool called KNIME, which offers a user-friendly gateway to BigQuery data. This tool is especially valuable if you have limited knowledge of SQL. The best part? It’s free and compatible with both Windows and Mac (unlike some alternatives such as Alteryx).
KNIME simplifies data transformation, particularly when you’re working with consistent schemas across your data. It can serve as a schema builder and data quality tool. Another neat feature is that it pulls data locally, which means that further data manipulations won’t incur extra costs, unlike direct query changes in the BigQuery console.
Note: KNIME is a platform with many uses and capabilities, not just a BQ connector – so if you are interested, feel free to check out more about its functionalities.
There were also some very cool sponsors such as SnowPlow, Meta, Piano, Observe Point, Analytics Boosters, Conductrics etc.
If all this hasn’t convinced you to attend MeasureCamp, maybe the fact that tickets are free will! Oh, and did we mention that drinks are on the house, and there are some exciting gifts like Lego sets and even a course by Simo Ahava? But seriously, Lego!
The post Unlocking Efficiency and Collaboration with Docs-as-Code appeared first on SeekandHit.
So what is Docs-as-Code exactly? It is a mindset shift that treats documentation as code, applying version control, collaboration, automation, and testing to ensure that documentation remains accurate, up-to-date, and easily accessible.
A mouthful of fancy-sounding words, right? Let’s simplify it down to what is really important to note here and create an example together.
The very simple idea behind this concept is to create, write and update your documentation the same way you create, write and update your code. This is achieved by keeping your documentation in version control systems, alongside your codebase. It is often stored in formats like Markdown or reStructuredText to be able to render it nicely with less effort. Having your documentation files in the code repository means that the changes to documentation can be tracked, reviewed and merged the same way as code changes.
So, you’ve got this Docs-as-Code idea in your head, but what does it look like in the real world? Let’s get down to brass tacks with a few examples you’re probably familiar with:
1. README.md:
Docs-as-Code “hidden” in plain sight. You likely use the README.md file in your repository. It can range from simple instructions on how to get your project up and running to deeper technical details if needed.
2. Swagger and API Endpoint Documentation:
API documentation is extremely important. If you’ve ever used Swagger or a similar tool to document your API endpoints, you’re already embracing Docs-as-Code. Keeping that documentation current and documenting the changes is like having an up-to-date map of your API for your colleagues.
Chances are, you’ve encountered these Docs-as-Code practices, or perhaps you’re already using them without even realising it.
One of the standout advantages of Docs-as-Code lies in its ability to enhance collaboration within development teams. Centralizing documentation in a single accessible location streamlines the onboarding process for new team members and provides valuable insights for “future cases of amnesia”.
Embracing Docs-as-Code means treating the documentation with the same diligence as the code itself. Rather than treating it as a separate entity, it becomes an integral part of the development process, sharing the same tools and workflows – hence the phrase “Keep the docs close to the code“. The tight-knit relationship of documentation and code in the Docs-as-Code approach makes you more encouraged to work on the documentation, leading to more up-to-date docs.
At the core of this integration is Git, a well-established version control system. Git provides a robust framework for tracking changes, managing collaboration, and maintaining a clear history of your documentation. When you combine an examination of the documentation with a review of the code itself, the likelihood of making errors is significantly reduced. Following this process enables your team members to proofread and suggest changes, as there may be some typos and parts you accidentally missed. This often leads to discussions that result in a unanimous decision on the best course of action.
Below, you’ll find concrete examples from our merge request comments:
Now, after delving into the world of Docs-as-Code and its advantages, traditional methods might start to lose their sheen, right? In the past, tools like word processors, Google Docs, PDFs, and Confluence were the go-to choices. Yet, they often painted a picture of static, unwieldy documents, resistant to change, and not exactly best buddies with modern development workflows. For us, the Docs-as-Code approach helped us avoid the common pitfall of forgetting to document essential information, since documentation became an integral part of our development process, not an afterthought.
If you’re still uncertain about the benefits of this approach, allow us to share a scenario that motivated us to enhance our documentation practices. We were working on a feature when other priorities suddenly demanded our attention, forcing us to temporarily halt its development. Fast forward a year, and we resumed work on the same feature, only to find ourselves spending time reacquainting ourselves with the code. This situation could have been easily prevented if we had kept thorough documentation, significantly reducing the time spent on recalling the details. That’s why we embraced this practice in our day-to-day workflow.
So, if you’ve found yourself intrigued by Docs-as-Code and are ready to jump in, let’s explore what steps you should take to get started. We have also created a step-by-step guide for you on our GitHub in case you want to try it out later.
If you’ve got a public repository, the easiest and quickest way to create a Docs-as-Code environment is by using your GitHub repository’s documentation feature to document your code. Inside your repository, you can create links to different markdown files, creating a well-organized documentation system.
But in many cases, you might not have a public repository, or you might want a more advanced documentation setup on a web page. Luckily, with today’s tools, this isn’t too complicated (and you can try it out for free).
Required components to get started:
Finally, after choosing your set of tools and setting everything up you should have a simple system that looks like this:
Once you choose your set of tools for the job, it is time to configure the documentation structure. The organization of the documentation is essential for clarity and accessibility. It provides a roadmap for users and developers, making it easy to find and understand information. A well-structured documentation system encourages and eases collaboration, which is the main aim of documentation.
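Treating docs as code also means you can test them like code. As a purely illustrative sketch (the docs/ folder and script name here are hypothetical, not part of any specific tool), a CI job could fail the pipeline whenever a relative link in your Markdown files points to a missing file:

```python
# check_doc_links.py - a minimal, hypothetical example of "testing your docs like code":
# exit with a non-zero status if a relative Markdown link points to a missing file.
import re
import sys
from pathlib import Path

LINK_PATTERN = re.compile(r"\[[^\]]*\]\(([^)#]+)")   # captures the target of [text](target)


def broken_links(docs_root: Path) -> list[str]:
    problems = []
    for md_file in docs_root.rglob("*.md"):
        for target in LINK_PATTERN.findall(md_file.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "mailto:")):
                continue                              # external links are out of scope here
            if not (md_file.parent / target).exists():
                problems.append(f"{md_file}: broken link -> {target}")
    return problems


if __name__ == "__main__":
    issues = broken_links(Path("docs"))
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```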
As mentioned earlier, we created a short and simple demo for you on our GitHub. You can follow the exact steps in the README.md file to replicate the minimal project on your side.
One of the most common challenges encountered when transitioning from traditional documentation to a modern Docs-as-Code approach is resistance to change. This resistance often comes from individuals within your organization who are accustomed to the old ways of documenting.
We know it can be hard to get going and change the process you are familiar with, but you have to think of the long-term benefits you will gain by switching to this style of working. We won’t lie – it is still not our favourite job to do.
But once we adopted this approach, we witnessed lots of improvements. Our documentation writing became more consistent and it simplified the onboarding process for new team members. It also led to fewer inquiries from our QA and other teams about specific feature functionalities. Perhaps most importantly, we had far fewer of those moments when revisiting sections of code we had written.
Now, let us share some guidelines for how to make the transition to Docs-as-Code smoother so you can enjoy long-lasting benefits from this change:
And there you have it – Docs-as-Code. Always keep in mind that the most challenging part is getting started. We encourage you to take the first step towards the many possibilities of this approach.
Happy coding and documenting!
The post Be bold beyond borders appeared first on SeekandHit.
Decision makers from export companies with eCommerce and Lead Generation business models as well as SEM agencies from all corners of Central & Eastern Europe gathered in Warsaw, Poland this September 14th. Naturally, we were there as well to take part in the second CEE International Growth Summit, hosted by Google.
International Agency Growth Program (IGAP) is Google’s Agency Platform designed to help Partners and their clients overcome the challenges of expanding internationally.
The Platform is meant to cover three pillars of growth: personal skills development, client development, and the growth of the agency as a whole. As part of the Program, SeekandHit has the opportunity to use these resources to both our clients’ and our own benefit, always with the same focus in mind: growth.
Regular meetings and other forms of communication with Google’s IGAP team are held all year round, but once a year we like to meet in person to discuss ideas, new discoveries, and tactics for international expansion for the upcoming period. That is exactly what this year’s Summit was about: identifying and using international growth potential, with a specific focus on retail and lead generation strategies.
The event kicked off with an exclusive session for IGAP agencies only, in which we were given a tour of the newest tools made available via the platform.
The audience then grew with additional agencies and companies before we immersed ourselves in more detailed lectures and discussions about identifying global opportunities and the most opportune markets, driving profitability, and creating the very best logistics plans for your business.
After a very pleasant lunch and networking break, the event continued in break-out sessions focusing on retail and lead generation, so each participant had the opportunity to pick the topic of their interest (or, in our case, use the “divide and conquer” strategy to cover both).
To close the formal section of the Summit, ex-CIA officer and now CEO Rupal Patel gave us some tips and tricks on overcoming challenging circumstances to unlock international growth, which will surely come in handy during the upcoming Q4.
This is where we left the formal halls, stages, and “business-only” talks to finish off the day with a dinner party, which was another great opportunity to get to know each other better and make some new contacts or even partnerships.
Adding the whole experience up, it’s safe to say that the CEE International Growth Summit 2023 was both educational and fun in many ways.
We arrived there ready to learn something new, but wound up getting so much more: great market insights, inspiration and courage to “be bold beyond borders”, together with some new acquaintances we really enjoyed making.
We are already looking forward to the CEE International Growth Summit 2024!
The post Asyncio demystified appeared first on SeekandHit.
Earlier this year we attended PyCon Italia, where we had the opportunity to see some really interesting topics and talks. One that caught our eye was the talk called The Hitchhiker’s Guide to Asyncio by Fabbiani (great fellow). Due to the surprisingly large number of people who attended the talk, more than half of us had to sit on the floor. Obviously, there was a huge number of people who either did not understand this topic or sought to understand it better. Throughout the talk, we were pleasantly surprised by the approach used: instead of jumping into technical jargon and explaining what is happening in the background, the presenter used analogies to bring the topic closer to the audience. This inspired us to delve deeper into the topic, understand it better ourselves by explaining it to each other through analogies, and finally, after completely simplifying it, bring it to you, hopefully improving your understanding of the subject.
Asynchronous programming is a programming paradigm that enables you to write code without blocking the main thread of execution. It might sound complex, but it is pretty much straightforward. This means that your program can take a break or do something else while the asynchronous task is running. Let’s illustrate this concept with a simple example.
Meet Little Tommy Sync. Tommy had been tasked to wash the dishes before he could go outside and play. So, being a good boy, he picks up the dishes and starts washing them one by one. Once he finishes, he can finally go outside and play.
On the other hand, we have Async Bob. Bob realizes he has a dishwasher, so he gathers all the dirty dishes, places them in the dishwasher and turns it on. Now, with some free time on his hands, he relaxes and plays video games for a while. Later, he takes the dishes out of the dishwasher and heads outside to play with Tommy.
Notice how both Tommy and Bob completed their chores at approximately the same time. However, Bob had more time to relax and do other things while the dishwasher was doing the work.
That’s precisely the essence of asynchronous programming — utilizing available resources in the most efficient manner. In the world of programming, we can harness this power to enable other tasks to run and reach completion while we patiently wait for a background task to wrap up its execution.
So how could we translate this to Python?
The Asyncio library in Python provides a powerful framework for asynchronous programming. It allows you to easily write concurrent code that can efficiently handle multiple tasks without blocking the execution flow.
The core concept behind Asyncio is the use of coroutines, which are functions that can be paused and resumed, allowing other tasks to be executed in the meantime. And all of those tasks are orchestrated by an event loop. Together they make the basic building blocks of the Asyncio library. Let’s again use an analogy to simplify this concept.
Imagine a fantastic orchestra where the event loop plays the part of the conductor, just like Tommy guiding the show. In this musical world, Bob and Ricardo, the coroutines, step into the spotlight as skilled musicians. They effortlessly perform their parts, pausing, switching and resuming with ease, adding a delightful rhythm to the performance. Tommy, with his expertise, directs the event loop, flawlessly coordinating and executing tasks, creating a mesmerizing and well-orchestrated spectacle that enchants the audience.
Jumping back to the real world now, let’s take a look at a diagram that demonstrates how this works in Asyncio.
It might look daunting at first, but we will simplify this a lot pretty quickly.
The event loop has a queue in which it accepts incoming tasks. The event loop pulls the tasks out of the queue and runs them.
Let’s take a look at the lifecycle of Task A:
This example isolates the execution of task A, but this is the process that essentially happens for each task, round and round until all tasks are complete (depending on how the loop was run). With this understanding, let’s now check out how this looks in code and try to understand its results:
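A minimal sketch that produces the behaviour described below (the names and sleep durations are just illustrative) could look like this:

```python
import asyncio


async def travel(name: str, destination: str, hours: float) -> None:
    print(f"{name} sets off towards {destination}")
    await asyncio.sleep(hours)   # pause point: the event loop is free to run other tasks
    print(f"{name} arrives in {destination}")


async def main() -> None:
    # Stippy is sent out first but has the longer trip, so Ozzy arrives before him
    stippy = asyncio.create_task(travel("Stippy", "Zadar", 2))
    ozzy = asyncio.create_task(travel("Ozzy", "Split", 1))
    await asyncio.gather(stippy, ozzy)


asyncio.run(main())
```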
As you can see, both tasks, Stippy and Ozzy, run until their first pause point, after which there is a little wait before Ozzy arrives in Split and, a bit later, Stippy in Zadar (even though we sent Stippy out first).
For more examples check out our Github repo: https://github.com/seekandhit/asyncIO
Asyncio excels in handling asynchronous I/O operations by allowing concurrent execution of tasks, particularly those waiting for I/O completion. It proves valuable in optimizing the performance of I/O-bound applications.
Here are some scenarios where Asyncio becomes advantageous:
Keep in mind that Asyncio may not always be the ideal choice. You need to research your project and weigh the negative and positive aspects before arriving at a decision.
Once you do decide to go with Asyncio, be sure you have a solid grasp of asynchronous programming to avoid a lot of unnecessary headaches. Even when you are aware of the workings of Asyncio, there are still those annoying and common mistakes that will happen sooner or later. They can range from exiting the application too soon (don’t forget to await) to getting your tasks swept up by the garbage collector. This is unavoidable: bugs will happen and they will haunt us for days. The best you can do is to read up and understand the mechanics behind it. If you need help or a reminder of how some of it works, be sure to read our article again.
The post From Military officer to Google Project Management certificate owner appeared first on SeekandHit.
As you can guess, it was (and still is) a hard switch from an extremely strict system, where there is no place for mistakes (especially because I used to work with explosives), into a (for me) new world, where a lot of stuff is tolerated. Especially when you are in a junior position like me, and when your team lead says: “It’s ok for a junior to make mistakes” (and I made A LOT in the beginning).
So for a newbie like me, the best thing to help me stop making so many mistakes was the Google Project Management course.
When I signed up for the course, I remember telling my lead that it was too easy and that I would finish it in two weeks. Little did I know that it was just the first of six courses, and that it would take me six months to finish the whole program.
For those of you reading this, I will write down a summary of those six courses, so maybe you will take the chance to finish them like I did.
This course was the first in a series of six to equip me with the skills I’ll need to apply to introductory-level roles in project management.
When I started to discover foundational project management terminology and gain a deeper understanding of the role and responsibilities of a project manager, I realized that I had already done some of that stuff in the military (just in a slightly harder version).
Throughout the program, there were videos from Google employees. They talked about their everyday tasks, how they gain experience, and how to implement learned lessons in projects.
After completing this course, I learned:
I was still a rookie in my company and didn’t have many obligations, so I was going through the second course like there was no tomorrow.
I need to mention here that I took notes from every course on my iPad. I like to write down everything I’m learning (maybe because memory isn’t my strong suit).
I started to learn how to set a project up for success in the first phase of the project life cycle: the project initiation phase. Also, I learned how to define and manage project goals, deliverables, scope, and success criteria.
In this course, I discovered how to use tools and templates like stakeholder analysis grids and project charters to help me set project expectations and communicate roles and responsibilities.
Once we had some tools and templates in place, my team lead and I started to implement them in our everyday process.
After completing this course, I learned to:
Here my confidence started to grow a bit. Also, I started to organize my team’s everyday tasks on our board, attend meetings… My life as a project manager started to look like the real thing.
But, the main occupation in my business life was the Google Project Management course.
I was rolling into the second phase of the project life cycle: the project planning phase.
I learned how to examine the key components of a project plan, how to make accurate time estimates, and how to set milestones. Later on, I discovered tools that helped me to identify and manage different types of risk and how to use a risk management plan to communicate and resolve risks.
Last, but not least, from Google’s employees I learned how to draft and manage a communication plan and how to organize project documentation.
I was getting deeper and deeper into the world of project management, and to be honest, I enjoyed it.
After completing this course, I learned:
Even though I’d learned a lot from these first three courses, I couldn’t even imagine what the next three would bring me.
You will find out how the rest of my Coursera PM education went in another blog on this topic.
The post Batching through BigQuery data from Python appeared first on SeekandHit.
A number of teams at SeekandHit work with BigQuery regularly. It is basically the go-to tool for ingesting data that is used by different analytics teams, regardless of whether the data is generated by our systems or collected from third-party APIs. The team that I am part of uses BigQuery for storing data related to marketing campaigns, SEO, and mobile app statistics. We also use it to store some data that is generated by other teams.
We can categorize our own usage of BigQuery into the following categories:
This data is later used for a wide range of applications – from generating reports and building models to determining what to market and driving other services. In this post we’ll take a look at how to query the data from BigQuery.
Some of our projects use data ingested by other teams or ourselves. Multiple services of ours use BigQuery data to generate various types of files. However, the problem is that these services usually have multiple data pipelines running concurrently. And, of course, we have a limited amount of RAM. Since we don’t know how many rows the source table will contain at any given moment, memory spikes sometimes occur. E.g.
The image above shows a memory spike that occurred in one of our services on April 16th between 2:00 AM and 2:26 AM.
This particular service runs some 30+ queries daily, each in an instance of a data pipeline. Since the service runs these pipelines daily, the first thing we did was to make sure that the pipelines are scheduled in a way that their runs don’t overlap. Meaning that we scheduled each pipeline to start at a time we are fairly certain other pipelines are not running.
Of course, this doesn’t eliminate the problem of loading large amounts of data into a single pipeline run. Therefore, we need to look at how to load data from BigQuery in batches (or pages in BigQuery terminology).
So, let’s take a look at how we can paginate through the data.
I would like to touch on the example provided in the docs on this page [1].
The “problem” with this example is that you need to take care to remember the next page token. The page token tells the service where you left off and from which page you want to continue. This is good if you want to skip some pages or simply have more control. However, in our use case, we want to load all of the data.
Let’s take a look at how we do it and what happens under the hood.
Google offers a number of datasets that are available to the general public. One of these is the google_trends dataset that I chose for this example. A dataset is a collection of tables. We’ll use the international_top_rising_terms table for this example. I chose it because it:
For this example, I’m gonna use Python 3.11.3 along with the google-cloud-bigquery package version 3.11.0. The example assumes that you have an environment variable GOOGLE_BIGQUERY_CREDENTIALS that contains the credentials encoded as a base64 string.
Let’s start by defining a function for creating our credentials.
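A minimal sketch of that helper, assuming the GOOGLE_BIGQUERY_CREDENTIALS variable holds the service account JSON encoded as a base64 string, could look like this:

```python
# auth.py - a rough sketch of the credentials helper described in this post
import base64
import json
import os

from google.cloud import bigquery
from google.oauth2 import service_account


def get_bq_client() -> bigquery.Client:
    encoded = os.environ["GOOGLE_BIGQUERY_CREDENTIALS"]
    info = json.loads(base64.b64decode(encoded))
    credentials = service_account.Credentials.from_service_account_info(info)
    return bigquery.Client(credentials=credentials, project=credentials.project_id)
```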
I’ve placed this function into the auth.py module.
Let’s go through the steps of the example, the full code is available at the end of the post.
First, we import the required packages, including the function we previously defined:
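In this sketch, that boils down to a single line:

```python
from auth import get_bq_client
```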
Nothing special going on here. We start by implementing a function that will run the query:
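A minimal version of it, sketched to match the description below rather than copied verbatim, might look like this:

```python
def run_query(query: str):
    """Run the query and return the result iterator (or None if something goes wrong)."""
    result = None
    try:
        client = get_bq_client()
        query_job = client.query(query)   # returns a QueryJob instance
        result = query_job.result()       # starts the job, waits and returns a RowIterator
        print(f"Total rows: {result.total_rows}")
    except Exception as error:
        print(f"Query failed: {error}")
    return result
```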
The function above accepts the query we want to execute as a string. It then declares a variable for our result.
In the next step we create a BigQuery client using the function we previously wrote (get_bq_client). Using the bigquery.Client instance, we can call the query method and pass the query argument to it. This method returns a QueryJob instance.
Calling the result method on the job instance starts the job and waits for the result. The query job result will be an iterator instance. We print out the total number of rows that the table has.
If any exceptions occur, we print out a message. Finally, we return the query job result.
I also decided to wrap this function into a get_data function that will return a generator. Each element returned by the generator will be a page (or batch). Here is the implementation:
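Again, sketched out rather than copied verbatim:

```python
def get_data(query: str):
    """Return an iterable of pages (batches) for the given query."""
    result = run_query(query)
    if result is None:
        return []
    # result.pages is a generator of Page objects; hand them out one by one
    return (page for page in result.pages)
```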
The function first calls the previous function and checks whether the result is None. If it’s None, just return an empty list. Otherwise, we iterate through the pages (batches) and return them one by one.
Now, we only need to write our BigQuery SQL query, call the get_data function and iterate over the generator of pages (batches).
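Putting it together (the exact date filter below is an assumption on our side):

```python
QUERY = """
    SELECT *
    FROM `bigquery-public-data.google_trends.international_top_rising_terms`
    WHERE refresh_date = DATE "2023-06-07"
"""

total_rows = 0
for page_number, page in enumerate(get_data(QUERY), start=1):
    rows = list(page)                      # materialize just this batch
    total_rows += len(rows)
    print(f"Page {page_number}: {len(rows)} rows (running total: {total_rows})")
```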
We’re querying the data from the public dataset’s table for June 7th. If we run the script now, we’ll get the following output (assuming everything goes well):
Now… it would have taken some 178 iterations to get all of the pages, given the current page size, but you get the idea.
The page size (in this case) is a “sensible value set by the API”. When working on one of our production tables, we got back 175 000 rows per page. However, there is a maximum size that the response object can have (see [1]). Despite trying, I wasn’t able to get a response larger than 35 000 rows in this example.
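If you want to nudge the page size yourself, the result method also accepts a page_size argument, although the response size cap mentioned above can still cut it short. A small sketch, continuing from the run_query example:

```python
# query_job is the QueryJob returned by client.query(...) in the earlier sketch;
# page_size is only a hint - the per-response size limit still applies.
result = query_job.result(page_size=10_000)
for page in result.pages:
    print(f"Got a page with {page.num_items} rows")
```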
Ok, so, that was our basic example, but what happens under the hood? Are we sure that the data is loaded from BigQuery in batches? Let’s take a look…
If you’re not interested in what happens under the hood… feel free to skip to the conclusion.
If you take a look at the link provided under [1], you’ll notice this:
“Each query writes to a destination table. If no destination table is provided, the BigQuery API automatically populates the destination table property with a reference to a temporary anonymous table.”
That means that each query that we run through the BigQuery client will create a temporary table for us. The reference to the destination table will be available in the destination property of the QueryJob instance. We get the job instance from calling the bigquery.Client.query method.
Ok, so, when we page through the results, the data is read from this table. I already mentioned that calling the method result on the bigquery.job.QueryJob instance returns an iterator. This iterator is actually an instance of bigquery.table.RowIterator.
The bigquery.table.RowIterator class inherits from the page_iterator.HTTPIterator class defined in the google.api_core package. Here is the inheritance diagram for these classes (only the fields and methods relevant to the discussion are included):
The pages property we call is defined in the page_iterator.Iterator class, meaning that it is inherited. Calling the property will raise an exception if an iteration has already started. Otherwise, it will return the result of calling the _page_iter method. Notice that this is a private method.
In turn, the _page_iter method will yield a page returned by calling the _next_page method (this means it returns a generator). Let’s take a look:
The _next_page method is abstract. Therefore, the classes that inherit from page_iterator.Iterator must implement it.
The page_iterator.HTTPIterator class implements this method. Here is its implementation:
As you can see, the method internally calls the _get_next_page_response method, extracts the obtained items (rows) and wraps them inside the Page class. It also gets the next page token from the response. The Page class represents a single page of results in an iterator. You can actually get the reference to the actual response from the page instance using page.raw_page.
self._page_start is assigned a function provided as a constructor argument. This function is used to do something after a new Page instance is created. I won’t go into much detail about it, but the RowIterator provides a function called _rows_page_start which gets the total number of rows from the response and the columns schema from the rows that are returned as part of the response.
The RowIterator actually overrides the _get_next_page_response method defined in the HTTPIterator. Let’s look at the override implementation:
The most important point here is that the method makes API requests to Google. The query parameters that it constructs contain the next page token obtained from the previous response. Here is the relevant part of the method (defined in HTTPIterator):
So, in essence, the pages property returns a generator. When we loop through the generator, the HTTPIterator makes an API request for each new page that needs to be generated. The RowIterator provides the function that obtains the total number of rows from the response and table schema.
In this post, we took a look at how to query a BigQuery table from Python using the client library google-cloud-bigquery.
We also took a look under the hood of how this happens. For each query we execute, BigQuery creates a temporary table. When paging through the result, the next page token is used to determine the next page for iteration, and an API request is sent to fetch the page. We can use the RowIterator, in which case this is done for us, or we can do it manually.
If you want to have full control over paging through the results of a query, then the example provided in [1] is a good starting point. You can also take a look at the _next_page, _get_next_page_response, and _get_query_params methods implementations.