Monthly Survey of Food Services and Drinking Places: CVs for Total Sales by Geography - July 2020

CVs for Total sales by Geography
Table summary
This table displays the results of CVs for Total sales by Geography. The information is grouped by Geography (appearing as row headers), Month and percentage (appearing as column headers).
Geography Month (percentage)
201907 201908 201909 201910 201911 201912 202001 202002 202003 202004 202005 202006 202007
Canada 0.69 0.57 0.59 0.56 0.58 0.61 0.67 0.59 0.63 1.22 1.29 1.13 1.21
Newfoundland and Labrador 2.87 2.49 3.13 3.19 2.77 3.06 2.94 3.17 3.10 4.99 4.02 3.97 5.25
Prince Edward Island 6.84 4.93 4.01 4.53 4.75 4.16 3.67 3.40 2.84 2.54 2.84 3.35 4.18
Nova Scotia 4.65 4.62 2.76 2.94 3.45 3.56 2.06 2.95 2.93 5.03 5.04 3.97 4.09
New Brunswick 2.28 1.30 1.56 1.87 1.45 1.40 1.35 2.16 2.47 4.36 4.44 3.89 3.43
Quebec 1.97 1.41 1.32 1.26 1.37 1.22 1.37 1.17 1.38 3.74 3.47 2.69 2.86
Ontario 1.11 0.94 1.04 0.96 0.99 1.02 1.05 0.97 1.03 1.97 2.14 1.89 1.99
Manitoba 2.43 2.74 2.18 2.42 1.95 2.00 1.92 1.80 2.18 4.91 4.17 3.73 5.00
Saskatchewan 1.92 1.92 1.58 1.59 1.79 1.56 1.51 1.68 1.98 3.68 3.32 2.66 3.19
Alberta 1.32 1.24 1.18 1.23 1.29 1.33 1.37 1.29 1.76 3.07 3.41 3.11 2.61
British Columbia 1.69 1.57 1.60 1.65 1.62 1.96 2.45 1.98 1.89 3.18 3.45 3.18 3.81
Yukon Territory 5.95 4.95 5.88 7.06 6.05 6.69 7.22 5.05 4.97 5.09 5.95 6.91 4.08
Northwest Territories 1.00 0.91 1.00 1.46 1.59 0.88 0.98 0.80 0.85 2.33 2.10 1.46 2.39
Nunavut 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Analysis 101, part 4: Case study

Catalogue number: 892000062020012

Release date: September 23, 2020

In this video, we will review the steps of the analytical process.

You will obtain a better understanding of how analysts apply each step of the analytical process by walking through an example. The example that we will discuss is a project that examined the relationship between walkability in neighbourhoods, meaning how well they support physical activity, and actual physical activity for Canadians.

Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Length
9:01
Cost
Free

Watch the video

Analysis 101, part 4: Case study - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 4: Case study")

Analysis 101: part 4 - Case Study

Hi, welcome to our Analysis 101 case study. Before you watch this video, make sure you've watched videos 1, 2 and 3 so that you're familiar with the three stages of the analytical process.

Learning goals

In this video, we will review the steps of the analytical process, and you will obtain a better understanding of how analysts apply each step of the analytical process by walking through an example. The example that we will discuss is a project that examined the relationship between walkability in neighborhoods, meaning how well they support physical activity, and actual physical activity of Canadians.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

Throughout this video, we'll refer back to the six steps of the analytical process and illustrate these steps through our walkability example.

What do we already know?

For our analytical plan, let's start by understanding the broader context. What do we already know about the topic? Well, we already know that obesity is a problem in Canada. Insights from the Canadian Health Measures Survey show that 29% of Canadian children and youth are overweight or obese, while 60% of Canadian adults are overweight or obese. We also know that many Canadian adults and children are not active enough. Data from the Canadian Health Measures Survey show that 33% of Canadian children and youth are meeting the physical activity guidelines, meaning that about 66% do not meet requirements. Likewise, 18% of Canadian adults are meeting the physical activity guidelines.

(Text: "Without being aware of it, our neighbourhoods and how they are built influence how healthy we are.")

These challenges have led to increased attention around the idea of changing the environment in which we live to help Canadians make healthier lifestyle choices. This idea was the focus of the 2017 Chief Public Health Officer's report on the state of public health in Canada, which noted that shifting behaviors is challenging. What would help Canadians become more active? More parks, better walking paths, or safer streets? Should policy makers look at crime rates? The list is endless.

What do we already know? Environments shape our health

There are a number of ways that our environment can influence our health behaviors. For example, our built environment such as how walkable our neighborhood is, or our health behaviors like how long we commute, or how many sports we participate in, can have an impact on our mental and physical health. Think about your own neighborhood. Does the design of your neighborhood make it easy or hard for you to walk to and from places or to get outside to exercise or play with your kids?

What do we already know? Knowledge gaps

Now that we understand the broader topic, let's identify the knowledge gaps. Previous studies had already demonstrated that Canadian adults living in more walkable neighborhoods are more active. However, recent findings focused on a few Canadian cities and did not provide national estimates. Likewise, previous work focused on how to get adults more active, but was limited in the analysis for children.

What is the analytical question?

Identifying a relevant analytical question is important to defining the scope of your work. For this study, the main question was: Does the relationship between walkability and physical activity in Canada differ by age? That's a clear, well-defined question, and it's written in plain language.

Prepare and check your data

(Text: Canadian Active Living Environments Database)

Now it's time to implement our plan. The first step is preparing and checking our data. Given that we had access to a new Canadian walkability dataset, we wanted to leverage this new data source before we go any further. Let me give you some more context on walkability. Essentially, walkability means how well a neighborhood supports physical activity. Walkability is higher in denser neighborhoods, such as those with more people living on one block. It's also higher in neighborhoods with more amenities, like access to transit, grocery stores or schools, or neighborhoods with well-connected streets. Each neighborhood was assigned a walkability score from one to five. If you live in a suburban area outside the city core, your neighborhood will likely have a walkability score of three. Downtown neighborhoods will likely have a score of four or five.

Perform the analysis

(Text: Canadian Active Living Environments; Canadian Community Health Survey (Ages: 12+ years); Canadian Health Measures Survey (Ages: 3 to 79 years))

For our analysis, we linked external walkability data to two major Statistics Canada health surveys. We made use of both surveys because they use different measurements for physical activity. One survey asked respondents to self-report their daily exercise, while the other made use of accelerometers. Accelerometers capture minute-by-minute movement data. Think of it as a fancy pedometer.

Summarize and interpret your results

After some data cleaning, concept defining and lots of documenting of our analytical decisions, we then started crafting a story based on our findings. Our main finding was that adults in more walkable neighborhoods are more active. However, different patterns were observed for children and youth: their physical activity was pretty consistent across different levels of neighborhood walkability. When we started this work, there was a lot of evidence linking physical activity and neighborhood walkability in adults, but only a few studies examining children. Some studies found that children were more physically active in more walkable neighborhoods, while others found the opposite. We performed age-specific analysis to examine this in greater detail and found that children under 12 are more active in neighborhoods with low walkability, like car-oriented suburbs, which may have larger backyards, schoolyards, and parks where they can run around and play safely. But the relationship for children 12 and over was similar to that of adults: they were more physically active in higher walkability neighborhoods. Summarizing your results in simple terms is key to getting your message across to various audiences. As you learned in previous videos, translating complex analysis into a cohesive story is important. It's your job to digest the information and guide your reader through your storyline.

Summarize and interpret your results: So what?

Interpreting the results also involves helping your audience understand the "so what" factor. For us, this meant highlighting that walkability is a relevant concept for adults, but that we need to think differently about how to support physical activity in children. For example, what about parks, neighborhood safety, and crime rates? Explain to your reader how your findings fit within the existing body of literature. It's also a great practice to communicate what needs to be done going forward to advance our knowledge and flag any limitations to the study.

Disseminate your work

This project led to some very interesting analysis, which we shared in different ways with stakeholders, policy makers and Canadians. Two major research papers were published for a more expert audience, while we also created an infographic on key points for a more general audience.

Summary of key points

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

The analytical process is a journey. It often takes much longer than you anticipate. First understand your topic and take your time to develop a clear and relevant analytical question. Make sure to check and review your data throughout the process and strive to translate your findings into a meaningful and interesting narrative. That way people will remember your work.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Analysis 101, part 3: Sharing your findings

Catalogue number: 892000062020011

Release date: September 23, 2020

In this video, you will learn how to summarize and interpret your data and share your findings. The key elements to communicating your findings are as follows:

  • select your essential findings,
  • summarize and interpret the results,
  • organize and assess reviews, and
  • prepare for dissemination.
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Length
11:38
Cost
Free

Watch the video

Analysis 101, part 3: Sharing your findings - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 3: Sharing your findings")

Analysis 101: Part 3 - Sharing your findings

Hi, welcome to Analysis 101 video 3. Now that we've learned how to plan an analytical project and perform the analysis, we'll discuss best practices for interpreting and sharing your findings.

Learning goals

In this video, you will learn how to summarize and interpret your data and share your findings. The key elements to communicating your findings are as follows: select your essential findings, summarize and interpret the results, organize and assess reviews, and prepare for dissemination.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

Going back to our six analytical steps, we'll focus on sharing our findings. If you've been watching the data literacy videos by Statistics Canada, you'll recognize that this work is part of the third step, which is the analyze phase of the data journey.

Step 5: Summarize and interpret your results

Let's start by discussing how to summarize and interpret your results.

Tell the story of your process

(Image of the 4 parts of the 5th step: Context - Evidence from other countries or anecdotal; Methods - Compare millennials (aged 25-34) to previous generations; Findings - Millennials have higher net worth and higher debt than Gen-X; Interpretation - Mortgages main contributor to debt for millennials.)

Presenting your findings clearly to others is one of the most challenging aspects of the analytical process. Let's use the millennial paper as an example. First, we started with the context, where we highlighted previous findings for American millennials, which motivated our study on Canadian millennials. Then we discussed our data and methodology, defining millennials and explaining how we compared them with previous generations. Then we walked through the key findings of the storyline. For example, we explained that while millennials had higher net worth than Generation X when they were younger, millennials were also more indebted. Finally, we interpreted our findings, digging deeper into the why. For millennials, we found that mortgage debt, which reflects higher housing values, contributed to their higher debt load.

Carefully select findings that are essential to your story

You'll likely produce several data tables or estimates throughout your analytical journey. Carefully select the findings that are essential to telling your story. Revisit your analytical questions and select visuals that clearly help to answer these questions. Remember that your results are not the story, but the evidence that supports your story.

Summarize your findings and present a logical storyline

Once you've selected the key results, summarize your findings and present them according to a logical storyline. Identify the key messages. Often these messages will serve as subheadings in a report or study. Also, always make sure to discuss your findings within the broader context of the topic. You've done great work and you want people to remember what your analysis contributes to the literature. Creating a clear storyline will ensure that people remember your work.

Define concepts

(Text on screen: A millennial is anyone in our dataset between 25 to 34 years old in 2016)

As you may recall from video 2, project-specific definitions of key concepts may have been established before starting your analysis. It's worthwhile to include any relevant definitions in your written analysis, like our definition of a millennial. This will help the audience better understand your findings.

Avoid jargon and explain abbreviations

In your written analysis, avoid jargon and explain abbreviations clearly. For example, instead of using a statistical term such as "synthetic birth cohort," explain your results in plain language. Define any acronyms that you use, like CSD, which stands for census subdivision, at the earliest possible opportunity.

Maintain neutrality

(Text on screen: Subjective - Large/small, High/Low, Only/A lot; Neutral - Rose or fell by X%, Higher or lower by X times.)

Ensure that you're maintaining neutrality by using plain language and not overstating your results or speculating when interpreting them. Avoid qualifiers like "large," "high," or "only," which can be subjective, and focus on explaining things using neutral language.

Here are some examples that were not neutral and were improved by letting the data tell the story. Instead of "employment growth plummeted down by 2%," you can say "over the previous quarter, employment fell 2%, the largest decline in the past two years." The second statement maintains neutrality. Instead of "millennials are dealing with a significantly worse housing market and have a lot more debt," you can say "median mortgage debt for millennials aged 30 to 34 reached over 2.5 times their median after-tax income." Don't rely on exaggerations to make your point; stay neutral. These statements are robust and supported by the data.

Expect to make mistakes

Expect that you will make mistakes. It's a normal part of analytical work. Remember that you're the person most familiar with your project, which puts you in an ideal position to identify mistakes. When you complete your preliminary draft, leave it alone for a few days and review it with fresh eyes. Don't be afraid to ask others for help in correcting your errors, and remember that learning from your mistakes will strengthen your analytical skills.

Step 6: Disseminate your work

Next, we're going to review the last step, which is how to prepare your work for dissemination and communicate your findings successfully.

Ask others to review your work

An important part of preparing your work for dissemination is asking others to review your work. You can request feedback from a range of people such as colleagues, managers, subject matter experts and data or methodology experts.

Seek feedback on different aspects of your work

Ask your reviewers for feedback on different aspects of your work, such as the clarity of your analytical objectives, appropriateness of the data you've used, definition of concepts, review of literature, methodological approach, interpretation of your results and clarity and neutrality of your writing.

Organize and assess reviewers' comments

After receiving comments from your reviewers, organize and assess their feedback. Look for any concerns that are common across reviewers' comments and determine which concerns will require additional analysis. Make sure to clarify anything that reviewers struggled to understand.

Document how you addressed reviewers' comments

Document how you've addressed each of the reviewers' comments. If you're not able to address certain concerns, it's important to justify why. In some cases, your organization may require that you provide a formal response to reviewers' comments. However, even if this is not required, it is a best practice to make note of the decisions you make when revising your work.

Preparing your work for publication involves many people and processes

Typically, many processes and many people are involved in helping to prepare your analytical product for dissemination. At Statistics Canada, analytical products undergo editing, formatting, translation, accessibility assessment, approval processes, and the preparation of a press release. You will want to consider these requirements for your work, whether it's a briefing note, an infographic or information on your organization's website.

How your work is published depends on your intended audience

How your work is disseminated will depend on your intended audience. You need to think about who the intended audience is: What do they already know? And what do they need to know? For example, the general public will want high-level key messages, while the media or policy analyst community will want more information, visuals and charts. Researchers, academics, or experts will want details about your data, methodology and limitations of your work.

How your work is published depends on your intended audience: Media and the general public

For example, we often provide highlights visually through charts and infographics when communicating findings to the general public. For a study on the economic well-being of millennials, the findings were communicated through Twitter, an infographic and a press release which summarized the key messages of the analysis.

How your work is published depends on your intended audience: Policy-makers

Other audiences, such as policy makers, may be interested in more detailed findings or a different venue where they can have their questions answered quickly. Results from the millennial study were shared with analysts and policy makers through a webinar, the publication of a study with detailed results, and other presentations.

How your work is published depends on your intended audience: Researchers, academics, experts

Findings are shared with researchers, academics or experts by publishing the analysis in detailed research papers or journal articles in peer-reviewed publications, as well as by presenting at conferences. This audience will be more invested in the specific details of your work and in knowing where the findings fit into the larger research field and knowledge base.

Communicating your work to the media requires preparation

Lastly, preparation is essential to successfully communicate your work to the media. Check to see if your organization offers media training. Prior to sharing your findings with the media, devote time to summarizing your main results and determining your key messages. Think about how to communicate your findings in simple terms. Anticipate potential questions and create a mock question and answer document.

Summary of key points

And that's a quick description of how to review and disseminate your work. First, tell the story of your process. Second, interpret your findings using clear and neutral language. Third, ask others to review your work. And fourth, preparation is key to communicating your findings. Remember to always stay true to your analytical question while telling a clear story. Next, take a look at our case study, where we provide an example of the analytical process through the lens of a study about neighborhood walkability and physical activity.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Analysis 101, part 2: Implementing the analytical plan

Catalogue number: 892000062020010

Release date: September 23, 2020

By the end of this video, you will learn about the basic concepts of the analytical process:

  • the guiding principles of analysis,
  • the steps of the analytical process and
  • planning your analysis.
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Analysis 101, part 1: Making an analytical plan
Length
6:11
Cost
Free

Watch the video

Analysis 101, part 2: Implementing the analytical plan - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 2: Implementing the analytical plan")

Implementing the analytical plan (Analysis 101: Part 2)

Hi, welcome to Analysis 101 video 2. Make sure you've watched video 1 before you start, because we're diving right back in. Now that we've learned how to plan an analytical project, we'll discuss best practices for implementing your plan.

Learning goals

In this video, you will learn how to implement your analytical plan. The key steps in implementing your plan include preparing and checking your data, performing your analysis, and documenting your analytical decisions.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

In the first video, we went through how to plan your analysis. In this video, we'll go through how to implement your plan. If you've been watching the data literacy videos by Statistics Canada, you'll recognize that this work is part of the third step, which is the analyze phase of the data journey.

Step 3: Prepare and check your data

The first step in implementing your plan is to prepare and check your data. Preparing and checking your data will make your analysis more straightforward and rigorous.

Define your concepts

Start by defining your concepts. In our previous example, which examined the economic status of millennials, we needed to determine how we would define millennials. In the literature, we found no official definition for that generation, but many different recommendations. It's important to make an analytical decision that's meaningful and defensible, to apply it consistently, and to document your decision. In this paper, millennials were defined as those aged 25 to 34 in 2016, an age group that aligns with our typical definition of young workers.

Clean up the variables and the dataset

Now that the concepts are clear, we'll start digging into the data. Start by cleaning and preparing your dataset. You'll want to rename the variables so that they are meaningful and formatted in a consistent manner. For example, rather than using the name "var3," which is confusing, we rename the variable "highest degree earned," which is much clearer. The effort you invest at this step will serve to make your life easier as you proceed with your analysis, especially if you document your decisions well.
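The renaming step described above can be sketched in a few lines of code. This is a minimal illustration using only the Python standard library; the column names, records and rename map are hypothetical, not taken from the study's actual data.

```python
# A minimal sketch of renaming cryptic variable names, using stdlib only.
import csv
import io

# Hypothetical raw extract with cryptic column names like "var3".
raw = io.StringIO(
    "var1,var2,var3\n"
    "1001,29,Bachelor's degree\n"
    "1002,31,High school diploma\n"
)

# Map cryptic names to meaningful, consistently formatted ones.
rename_map = {
    "var1": "respondent_id",
    "var2": "age",
    "var3": "highest_degree_earned",
}

reader = csv.DictReader(raw)
cleaned = [
    {rename_map.get(col, col): value for col, value in row.items()}
    for row in reader
]

print(cleaned[0]["highest_degree_earned"])  # Bachelor's degree
```

Keeping the rename map in one place also serves as documentation of the decision, so anyone rerunning the analysis can see exactly how each raw variable was relabelled.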

Check your data

(Table presenting the economic well-being research by generation, where the left column represents the generational groups. The middle and right columns represent the average age in 1999 (Gen-Xers = 26 years old; Millennials = 14 years old) and 2016 (Gen-Xers = 43 years old; Millennials = 66 years old), respectively.)

At this stage, check your data to ensure that it's of the highest quality. For our example, we should check the average age by generational group to make sure there is no issue with how age is calculated. The average age for Generation X is 26 years old in 1999, and in 2016 their average age is 43. This makes sense. However, while millennials are 14 years old on average in 1999, they are 66 on average in 2016. In this case, we should check our program code, examine the data to fix the error, and document why this error occurred.
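A check in the spirit of the one above can be automated: compute the average age per generational group at each reference year and assert that the ages advance by exactly the elapsed calendar time. The person-level records below are invented for illustration; only the expected averages (26/43 for Gen-X, 14 in 1999 for millennials) echo the transcript.

```python
# Consistency check: average age by generation at two reference years.
from statistics import mean

# Hypothetical person-level records: (generation, birth_year).
records = [
    ("Gen-X", 1973), ("Gen-X", 1972), ("Gen-X", 1974),
    ("Millennial", 1985), ("Millennial", 1984), ("Millennial", 1986),
]

def average_age(records, generation, reference_year):
    ages = [reference_year - birth for gen, birth in records if gen == generation]
    return mean(ages)

for gen in ("Gen-X", "Millennial"):
    age_1999 = average_age(records, gen, 1999)
    age_2016 = average_age(records, gen, 2016)
    # Ages must advance by exactly the elapsed calendar time (17 years);
    # anything else points to a bug in how age was derived.
    assert age_2016 - age_1999 == 2016 - 1999, f"age calculation error for {gen}"
    print(gen, age_1999, age_2016)
```

With correct data, millennials averaging 14 in 1999 come out at 31 in 2016, not 66; an assertion like this would have caught the error the transcript describes before it reached the analysis stage.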

Data checks throughout your analysis

To add rigor to your analysis, there are data checks that you should perform at different stages. In the early stages, you can check the raw data to ensure that it's clean and ready for analysis. You can also check the frequency distributions of the variables to ensure that the data are consistent with past datasets. Then, as you are checking the results of your analysis, you can verify whether your findings are consistent with the literature. All of this work should be done in well-documented code that is saved for future reference.

Step 4: Perform the analysis

The second step in implementing your plan is to perform the analysis. As discussed in video one, your analysis should be planned out when creating your analytical plan. So once your data are clean and prepared, you're ready to perform the analysis.

Implementing your plan

Performing the analysis should be straightforward if you created a clear analytical plan and cleaned and prepared your data appropriately. You should conduct your analysis as planned and, as discussed previously, check your results as you go to ensure that the data and methods you are using are producing valid results. Another benefit of checking your results as you go is that you can flag unexpected findings.

Be flexible

If you have unexpected results, this may be due to an error in the data, or it might be some unexpected research finding. Be flexible and adjust your analytical plan to further investigate results that are not in line with your expectations or do not match up with theory. We will see an example of this in the case study video where additional analysis was necessary to disentangle a complex relationship.

Summary of key points

And that is a quick overview of how to implement your analytical plan. This involves preparing and checking your data and then performing the analysis. Throughout this work, make sure to document your decisions. In the next video, you'll be learning about interpreting and sharing your work.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Video - Geoprocessing Tools (Part 1)

Catalogue number: 89200005

Issue number: 2020017

Release date: November 24, 2020

QGIS Demo 17

Geoprocessing Tools (Part 1) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Geoprocessing Tools (Part 1)")

So today we'll introduce geoprocessing tools, which enable layers to be spatially overlaid and integrated in a variety of ways. These tools epitomize the power of GIS and geospatial analysis, facilitating combining feature geometries and attributes, whether it be assessing spatial relations, distributions or proximities between layers and associated variables of interest. We'll demonstrate these tools with a simple case-study, examining land-cover conditions near water features, also known as riparian areas, in southern Manitoba. These tools can be reapplied and iterated with multiple layers, enabling you to combine, analyse and visualize spatial relations between any variables, geometries and layers of thematic relevance to your area of expertise.

So first, the Merged Census Division feature from the AOI layer was selected and subset to a new layer – CAOI – since Selected Features is not available when running tools as a batch process.

In addition to the interactive and attribute selection tools covered previously, there is one final type – Select by Location. This selects features from the input layer according to their spatial distribution relative to a second layer and the selected geometric predicates. The predicates define the particular spatial relations used when selecting features. We'll use Intersects, Overlaps and Are Within. Multiple predicates can be used, provided they do not conflict, and processing times increase with the number of selected predicates. At the bottom, the alternative selection options are available in the drop-down, but we'll run with the default.
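The geometric predicates above can be illustrated with a deliberately simplified model: axis-aligned bounding boxes standing in for feature geometries. Real GIS predicates operate on full geometries (and QGIS evaluates them for you); this stdlib sketch only shows the logic of selecting features by spatial relation.

```python
# Conceptual sketch of "select by location" predicates on bounding boxes.
from typing import NamedTuple

class BBox(NamedTuple):
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def intersects(a: BBox, b: BBox) -> bool:
    # True when the two boxes share at least one point.
    return a.xmin <= b.xmax and b.xmin <= a.xmax and a.ymin <= b.ymax and b.ymin <= a.ymax

def within(a: BBox, b: BBox) -> bool:
    # True when box a lies entirely inside box b.
    return a.xmin >= b.xmin and a.xmax <= b.xmax and a.ymin >= b.ymin and a.ymax <= b.ymax

area = BBox(0, 0, 10, 10)  # hypothetical area of interest
features = {"f1": BBox(2, 2, 4, 4), "f2": BBox(8, 8, 12, 12), "f3": BBox(20, 20, 22, 22)}

# "Select by location" with the Intersects predicate:
selected = [name for name, box in features.items() if intersects(box, area)]
print(selected)  # ['f1', 'f2']
```

Note how the two predicates differ: f2 intersects the area but is not within it, which is why combining predicates changes which features are selected.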

So most selected features match the predicates but two spatially disconnected features were also returned due to a common attribute. So now we'll use the Multipart to Singlepart tool to break the multi-polygons into separate features, running with Selected Features Only.

Now we'll use a slight variation of Select by Location - Extract by Location. Instead of creating feature selections in our input layer, this will generate a new layer. So matching the predicates and comparison layer to those used in Select by Location, we'll click Run. In addition there is also Join by Location, which enables fields from the second layer to be joined to the first according to the predicates and the specified join type – as one-to-one or one-to-many. So these by Location tools enable features to be selected or extracted and field information joined between layers according to their relative spatial distributions.

So now we'll merge the land-cover 2000 layers into one file with the Merge Vector Layers tool. Open the Multiple Selection box and select the four land-cover files. We'll also switch the Destination Coordinate Reference System to WGS84 UTM Zone 14 for spatial analysis. Click Run with a temporary file. So Merge can be applied to vectors of the same geometry type. It works best when layers contain the same fields and cover distinct yet adjacent areas – making the land-cover layers highly suitable. Two additional fields specifying the originating layer and file path for each of the features are included in the output.

While Merge is running, we'll reproject the watershed layer to the same Coordinate Reference System for consistency in our spatial analysis.

Now we'll join the provided classification guide with the class names to the merged output, using the Joins tab. So Code is the Join Field and COVTYPE the Target Field. We'll join the Class field and remove the prefix. Now we can run the merged layer through the Fix Geometries tool to accomplish two tasks simultaneously. First it will fix invalid geometries – critical for adding spatial measures and applying geoprocessing tools – while also permanently joining the Class fields. The process may take a few minutes to complete.

 So now we'll rename the Reprojected and Fixed layers to PTWShed for projected tertiary watershed and FMLC2000 for fixed merged land-cover 2000. This will enable us to use the autofill settings to populate the file paths and names when running Clip as a Batch Process. So open Clip from the Toolbox and click Batch Process.

As we've covered, the Clip tool helps standardize the extent of analysis for multiple layers to an area of interest, or reduce processing times and file sizes in a workflow. The inputs can be of any geometry type while the Overlay Layer is always a polygon. Features and attributes that overlap with the Overlay Layer are retained, with the Overlay Layer acting like a cookie cutter on the input.

So select FMLC2000 and PTWShed as the inputs and select CAOI as the Overlay Layer. We can then copy and paste it into the next row – which we could repeat for as many entries as required. We'll click the plus icon and copy PTWShed for the Input to prepare this layer for an upcoming demo. Here we'll use Manitoba Outline as the Overlay Layer. We'll store the output files in a Scratch folder for intermediary outputs in our workflow, which can then be deleted at the end of Part 2 of the demo. Enter C for the filename, click Save and then use Fill with Parameter Values in the Autofill settings drop-down. This adds a C prefix to our existing layer names. We'll store the last file in the Geoprocessing folder so that it is retained. Click Run and we'll pick back up once completed. The process takes around five minutes to complete.

So with the clipped layers complete, load them into the Layers Panel. I'll move them back into the Processing Group for organization purposes and then zoom in on the layers.

We can load the provided symbology file to visualize the different land-cover classes.

Then we'll add an area field to the clipped land-cover file. Call it FAreaHA for field area, using a decimal field type with a length of 12 and a precision of 2. We'll reuse these parameters for adding subsequent numeric fields. Enter the appropriate expression - $area divided by 10000.
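The arithmetic behind that expression is simple unit conversion: $area returns square metres in a projected CRS such as WGS84 UTM Zone 14, and dividing by 10,000 yields hectares. A minimal Python sketch, with a hypothetical function name and sample value:

```python
# Sketch of the FAreaHA calculation: square metres to hectares,
# rounded to 2 decimal places as in the field definition (precision 2).
def m2_to_hectares(area_m2):
    return round(area_m2 / 10_000, 2)

print(m2_to_hectares(1_250_000.0))  # a 1.25 km^2 feature is 125.0 ha
```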

Now we'll use Select by Expression to isolate 'Water' features using "COVTYPE" = 20 or "Class" LIKE 'Water' – and then click Select Features.
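The selection logic can be sketched as a simple filter. The feature records below are invented for illustration, and exact equality stands in for the LIKE comparison since no wildcards are used in the expression:

```python
# Toy sketch of Select by Expression: keep features where
# "COVTYPE" = 20 OR "Class" = 'Water'.
features = [
    {"COVTYPE": 20, "Class": "Water"},
    {"COVTYPE": 34, "Class": "Grassland"},
    {"COVTYPE": 20, "Class": "Water"},
]

selected = [f for f in features if f["COVTYPE"] == 20 or f["Class"] == "Water"]
print(len(selected))  # 2
```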

Now we'll generate a Buffer around the selected features to begin creating the Riparian area layer. There are many Buffer tools available in the Processing Toolbox – which we'll demonstrate in Part II – here using the default tool.

We'll check the 'Selected features only' box and enter 30 for the distance – a common riparian setback in land-use planning and policies. Change the End Cap Style to Flat and check Dissolve Results, so that any overlapping buffers are merged to avoid inflating total area estimates. Run with a temporary output file. We'll rerun the tool, toggling back to the Parameters and changing the distance to 0, to output Water features as their own temporary layer – reducing processing times for the next tool.

Buffer tools can be applied to any vector geometry type. And they are used to assess the proximity of features to those in other layers. We can also use buffers to facilitate combining our geometries and attributes with other layers – like buffering lines or points to use them as a difference layer. The buffer contains the input layer's attributes, which can be used for further analysis. The outputs are often applied with other geoprocessing tools for further examination.

So we'll rename the outputs, naming the first B30W and the second LC2000Water, to facilitate their distinction.

Zooming in on the buffer, the input water features were also included in the output geometry. Since we are not interested in the water features themselves but the land-cover conditions around them, we'll run the water buffer through the Difference tool using LC2000Water as the Overlay Layer to retain only the buffered area. So Difference is the opposite of Clip – retaining only input features that do not overlap with the Overlay Layer. Like Clip, the input can be any geometry type, while the Overlay Layer is always a polygon. Difference can be used whenever we are interested in features that do not overlap with a specific polygon, such as areas beyond a certain drive-time or distance from hospitals, or farm fields, roads or grain elevators not impacted by historical flooding. So click Run and we'll continue once the output is complete.

Toggling the water layer off, we can see that the Difference has retained only our 30 metre buffer. So now we've successfully generated our riparian area layer but need to follow up with the Intersection tool – running it twice to extract watershed codes and land-cover classes to our layer. Intersection retains the overlapping feature geometries of the input layers and any selected attributes of interest in the Fields to Keep parameter. If geometry types differ between layers, the first layer's geometry is used in the output. Thus, Intersection can help combine variables of interest from multiple layers.

For the first run we'll use the Difference and clipped watershed layers as the inputs to assign watershed codes to the riparian buffer. This will enable us to examine land-cover conditions by watershed in Part II of the demo. And for PTWShed check the sub-basin code field in the Multiple Selection box. For the Difference layer, we'll select an arbitrary field for the Fields to Keep parameter – here selecting the "layer" field, clicking OK and then clicking Run. This process takes around 5 minutes and we'll continue when complete.

Within the Attribute Table we can see watershed codes have been successfully assigned to the riparian layer. Now we'll run the tool again, using the intersect as the Input and the clipped land-cover file as the Overlay Layer to integrate the land-cover features in the riparian areas. We'll retain the watershed code field from the first layer and the "Class" and "FAreaHA" fields from the land-cover. We'll save it to file, storing it in the main Geoprocessing folder and calling it RipLC2000 for riparian land-cover 2000. If the tool fails, use the Fix Geometries tool and rerun the Intersection with the fixed outputs. We'll pick back up after the layer is created, which may take up to 20 minutes.

With the riparian land-cover layer loaded, copy and paste the style from the clipped land-cover to visualize the different feature classes occupying these areas. Now we've successfully combined the riparian buffer by watershed with the land-cover layer. And for the final component of Part I we'll add four new fields with the Field Calculator, starting with the intersected area in hectares, to determine the area of each land-cover feature within the buffered riparian area. Use the same parameters and expression as applied for creating the FAreaHA field.

So next we'll calculate the percentage of each feature within the 30 metre buffer, to assess the relative distribution of the original features within the riparian setback and isolate any potential violating land-uses. We'll call the field PrcLCinRip, for percent land-cover in riparian area, with the same parameters as the previous fields. Expanding the fields drop-down, we'll divide IAreaHA by FAreaHA and multiply by 100.
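The percentage field is straightforward: the intersected area divided by the feature's total area, times 100. A short sketch with a hypothetical function name and invented sample areas:

```python
# Sketch of the PrcLCinRip calculation: the share of a feature's total
# area (FAreaHA) that falls inside the riparian buffer (IAreaHA).
def prc_lc_in_rip(i_area_ha, f_area_ha):
    return round(i_area_ha / f_area_ha * 100, 2)

print(prc_lc_in_rip(3.5, 14.0))  # 25.0 - a quarter of the feature lies in the buffer
```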

The next two fields are used to create an identifier that combines the subwatershed codes and land-cover class fields, which we'll use to aggregate and assess riparian land-cover by watershed. First is an FID field, or Feature ID, which we'll use for the Group_By parameter when using the concatenate function. Leave the parameters at their defaults and double-click the @row_number expression.

Now we can use Concatenate to combine our fields in creating the ID. This is extremely helpful for further processing and analysis, such as distinguishing and rejoining different processed layers to original features or aggregating datasets by different criteria. So we'll change to a text field type with a length of 100 and call it "UBasinLCID".

So type concatenate in the expression box to specify the function to apply, then an open bracket, and double-click SUBBASIN in the Fields and Values drop-down. Using the separators and adding a dash in single quotes will help separate the codes and class fields for interpretability. As noted, the FID field is used for the Group_By parameter, writing group underscore by, colon, equal sign and then double-clicking the FID field.
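Because each FID is unique per row, the grouped concatenation effectively reduces to joining the subwatershed code and class fields with a dash for every feature. A sketch of that logic, with invented field values:

```python
# Sketch of the UBasinLCID result: with a unique FID per feature, the
# concatenate(...) expression with a '-' separator yields "SUBBASIN-Class".
rows = [
    {"FID": 1, "SUBBASIN": "05OJ", "Class": "Water"},
    {"FID": 2, "SUBBASIN": "05OJ", "Class": "Grassland"},
]
for r in rows:
    r["UBasinLCID"] = f'{r["SUBBASIN"]}-{r["Class"]}'

print(rows[0]["UBasinLCID"])  # 05OJ-Water
```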

We can see the combined fields in the output preview. Given the number of features, the concatenate function can take up to 30 minutes to run. After it's complete, be sure to save the edits to the layer and the project file with a distinctive name for use in Part II of the demo.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Analysis 101, part 1: Making an analytical plan

Catalogue number: 892000062020009

Release date: September 23, 2020

By the end of this video, you will learn about the basic concepts of the analytical process:

  • the guiding principles of analysis,
  • the steps of the analytical process and
  • planning your analysis.
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
N/A
Length
8:13
Cost
Free

Watch the video

Analysis 101, part 1: Making an analytical plan - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 1: Making an analytical plan")

Analysis 101: Part 1 - Making an analytical plan

Hi, welcome to Analysis 101, video one: Making an Analytical Plan.

Learning goals

By the end of this video you will learn about the basic concepts of the analytical process: the guiding principles for analysis, the steps in the analytical process and planning your analysis. This video is intended for learners who want to acquire a basic understanding of analysis. No previous knowledge is required.

Analysis at your organization

Take a second to think about analysis at your organization. What role does analysis play? Are you and your colleagues producing briefing notes for senior leadership? Are you writing reports for clients or for your website? Are you doing more technical or descriptive work? Does your organization have guiding principles that you should be aware of? You'll be taking these into consideration when you plan your analysis.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

On this slide you can see that there are six main steps in the analytical process, and each is related to making a plan, implementing that plan or sharing your findings. We will explain the main activities that you will need to undertake within each step. If you've been watching Statistics Canada's data literacy videos, you'll recognize that this work is part of the third step: the analyze phase of the data journey. This diagram is the backbone of our analytical process. We will come back to it in each of the videos in this series.

Step 1: What do we already know?

In this video, planning your analysis starts with understanding the context and investigating what we already know about a topic. Start by ensuring you fully understand the broader topic and the context surrounding it, and think through the following questions. What do we already know about the topic? Has one of your colleagues already done a similar exercise? Begin by reviewing any previous work done on the topic. Once you've read up on the topic, you can identify the knowledge gaps. What is missing in the previous work? This will help you show how your project adds value.

Example

To make sure you understand these steps, let's go through an example together. This is from a study on the economic well being of millennials. This study was motivated by a lack of information on financial outcomes for Canadian millennials.

Millennials-Context

When we began work for this study, we knew that Millennials were often stereotyped by the media as still living in their parents' basements, spending too much on takeout food, and so on. We also knew that a study by the United States Federal Reserve Board had shown that American Millennials had lower incomes and fewer assets than previous generations had at the same age. What were the knowledge gaps? Despite anecdotal media reports on Millennials, we knew that there wasn't a detailed study assessing the economic well-being of Canadian Millennials.

Millennials-Relevance

Why is our analysis relevant? Well, housing affordability and high debt levels have been identified as concerns for younger generations early in their lives. From this we knew the topic was relevant for policy makers, journalists and Canadians. We will return to this example later.

Step 2: Define your analytical question

Back to our analytical process. The next step is to define your analytical question.

What is your analytical question?

How do you define your analytical question? Very clearly state the question you are trying to answer and use plain language. This means using vocabulary that an eighth grader could understand. You might have one main analytical question followed by some supporting questions. Why is your question relevant? Why should we care about your work? Define the value that your analysis adds, whether to your organization, your client, or to our understanding of the topic.

Plan your analysis

Now that you have a relevant question, how will you answer it? This is the perfect time for you to put together an analytical plan which provides a road map for answering your analytical question. You will need to think about the context of your topic and how you will answer your question. What data and methodology are needed to answer your question? You will also need to think about how you will communicate your results, whether through a briefing note, analytical paper, infographic or presentation.

Identify your resources

Now that you have your analytical plan, think about your resources. Feedback is an essential element of your analytical journey and you should leverage input from colleagues at every step. Typically we will put together a short plan for colleagues and management to review. Maybe some of your colleagues have expertise on the topic you are working on. Colleagues might also have expertise in the data you are using or your methodology. Our colleagues are often in the best position to provide tips and feedback and to help us work through problems.

Millennials-Analytical question

For our example about the economic well-being of Canadian Millennials, our main analytical question was: Are Millennials better or worse off than previous generations at the same age in terms of income levels, debts, assets, and net worth? Given the level of interest in Millennials and debt levels, we wrote a short analytical paper that answered this question.

Your analytical journey

Remember it this way: analysis is like taking a canoe trip. You need a good plan. You should map out where you are going and how you will get there. That's your analytical plan. You will also need a strong analytical question, solid data, and good methodology. That's your canoe.

Remember: Define your analytical question

The key takeaway from this video is to remember to develop a clearly defined analytical question. Even with a great topic and high-quality data, you cannot produce good results without a well-defined question.

Summary of key points

To summarize, the analytical process can be viewed as a series of steps designed to answer a well-defined question. Once the topic has been defined, the next step is to create an analytical plan. And always incorporate the feedback you receive during the planning stage of your analytical project. Before the next video, take a few minutes to identify two analytical questions and think through why these questions are relevant for your organization. Stay tuned: up next, we'll share tips on how to implement your analytical plan.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Video - Semi-Automated Mapping in QGIS with the Atlas Panel

Catalogue number: 89200005

Issue number: 2020016

Release date: November 24, 2020

QGIS Demo 16

Semi-Automated Mapping in QGIS with the Atlas Panel - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Semi-Automated Mapping in QGIS with the Atlas Panel")

So following up from Creating Maps in QGIS, today we'll discuss using the Atlas Panel in the Print Layout to rapidly generate multiple maps. The Atlas panel uses a specified 'Coverage Layer' to define the geographies of the outputs. Today we'll use it to map population dynamics in Census Metropolitan Areas - or CMAs for short - across Canada - effectively semi-automating the map production process.

So once again the preparation steps are provided in the video description – but to summarize quickly, we used the one-to-one join procedures to link the population table to the Census Tract layer. The Refactor Fields tool was then applied to save to a permanent file with correctly attributed field types. We also dissolved the Census Tract layer using the Census Metropolitan Area Name (CMANAME) field to create the Coverage Layer for our Atlas and, with the fill colour set to fully transparent, to outline the Census Tracts within our main map group.

The cartographic Census Subdivision layer was then run through the Fix Geometries tool, dissolved by the Provincial Unique Identifier and run through the Multipart to Singlepart tool, ensuring that all features were separate entries within the attribute table. Area fields were added to it and to the Lakes and Rivers polygon using the Field Calculator, and the Select by Expression tool was used to subset features with areas greater than 2,500 and 500 square kilometres respectively, which were grouped together to comprise our Inset Map.

The Labels applied to our coverage and province layers had similar settings. So to quickly summarize - in the Formatting subtab for the Coverage Layer we specified to wrap text on the dash character. A text buffer was also applied – with the coverage layer set to 75% Opacity. The Placement was set to Horizontal (slow), ensuring the legibility of the text-based labels. And in the Rendering tab, 'Only draw labels which fit completely within the feature' was checked, and 'Discourage labels from covering the feature's boundaries' was selected with the highest weight applied. So now we can toggle off the Prep Layers group.

Now in the Print Layout I used the add shapes tool, specifically Add Rectangles, to divide the layout for the map items, which were then locked within the Items panel. The alignment tool on the Actions toolbar was then used to ensure that added items were placed above the rectangles. I've also already added many of the mandatory map items – including the additional information, scale bar, legend and title. The title uses an expression, including a generic text prefix within single quotes followed by the vertical separators and then Census Metropolitan Area name field to label by metropolitan area, which will update automatically once our Atlas is generated.

Just a quick tip for map item formatting - if greater control was needed – the Print Screen or Snipping tools could be used to export, externally format and re-add items, such as the legend, diagrams or table, as a picture.

On that note our North Arrow is still missing. So rather than using the add Arrow and Text label function, let's add it from Picture this time. Clicking and dragging across the desired location in the Print Layout, we can then go to the Item Properties Panel and expand the Search Directories. And once loaded select the desired icon. I'll alter the Fill Colour to be darker, to ensure its visibility against the main map, and then clicking Back, we'll also switch the placement to Middle.

So now we can add both maps simultaneously – placing the main map in the larger box of the layout and the inset map in the smaller box on the right. The main map is currently being rendered over our north arrow, so once again we'll select it and use the Lower in the Alignment tools to ensure the north arrow is visible.

While they're rendering I'd also like to highlight that the scale-bar for the main map is set to Fit Segment Width as opposed to the Fixed Width used in the previous demo, which will update the scale-bar according to the size of the census metropolitan area being mapped.

Now for our second map, let's add a Grid, expanding the drop-down and clicking the Plus icon – and then select Modify Grid Properties. We'll also change the CRS to WGS 84, entering 4326 in the system selector, so that we can show coordinates in decimal degrees. We'll specify an interval of 2 degrees, which at the moment adds many lines, but will be more appropriate once we generate the Atlas. Check the Draw Coordinates button. The format of the coordinates can then be selected in the drop-down, here we'll leave it with the default. We'll specify to show latitude only for the Right and Left parameters and longitude only for the Bottom and Top. At the bottom of the panel we'll change the precision for the Grid units to 1.

Back in the Item Properties Panel for the Inset Map, we'll also add an Overview – clicking the plus icon and specifying the map being overviewed in the drop-down, which is Map 1.

Now we can generate the Atlas. So in the Atlas drop-down on the Menu-bar select Atlas Settings. And within the Atlas panel check Generate Atlas and specify the Coverage Layer – in this case DCMAAtlas or the dissolved Census Metropolitan Areas. We'll specify the field used for the Page Name, in this case the CMANAME field and use the same field for Sort by. This will sort them alphabetically when we preview our Atlas. Uncheck the Single Export option, as we want each metropolitan area to be a separate map and select the desired output file format. Additionally we'll change the output filename to something more intuitive than just the feature IDs. So we'll switch 'Output' to JCTPop and then click on the Expression box. In the Variables drop-down, we will replace featureID with @atlas_pagename, which we set to CMANAME field, so the outputs will be named according to the census metropolitan area. This did involve a trade-off– requiring the entries within the CMANAME field to be reformatted – removing periods, slashes, question marks and other symbols that caused erroneous filenames, which would lead to the atlas output failing. So having replaced these characters I can just click OK.
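The filename clean-up described above can be sketched in Python. The exact character set and the underscore replacement are assumptions based on the symbols mentioned (the transcript says these were removed beforehand rather than replaced programmatically), and the function name and sample value are invented:

```python
import re

# Hedged sketch: strip characters from an @atlas_pagename value (a CMANAME)
# that would produce erroneous output filenames, replacing them with '_'.
def safe_pagename(name):
    return re.sub(r'[./\\?:*"<>|]', '_', name)

print(safe_pagename("St. John's / Mount Pearl?"))  # St_ John's _ Mount Pearl_
```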

Now we can Select Map 1 and within the Item properties Panel check the Controlled by Atlas box. For the main map we'll specify 5% for the Margins Around the Feature. In the main interface we can now toggle off the inset map group. And back in the Print Layout select the main map and in the Item Properties Panel check Lock Layers and Lock Styles.

Repeating with Map 2, we'll check controlled by Atlas once again and enter a larger margin of 750% to ensure the broader geographic location is shown. Then in the main interface toggle off the main map group and back in the layout select Lock Layers. And this is so that the inset map does not show the layers of the main map and vice versa.

Now on the Atlas toolbar we can select the Preview Atlas icon. So we can toggle through the different CMAs alphabetically, or select ones of interest from the drop-down. The next metropolitan area is Barrie. So as you can see, the title, grid and scale-bar update rapidly, while the maps, particularly the Inset Map, take longer. This is likely due to the detail of the cartographic boundary file, combined with the broader extent being mapped within the inset.

I ran the Atlas output earlier – as we can see scrolling through the maps, most are appropriately formatted and ready for use as supporting figures or stand-alone documents. Relatively few maps require manual editing and individual export to maintain intuitive values for map items such as the scale or grid intervals. For example, for Edmonton we would want to use a larger interval for the Grid coordinates, such as 5°. And similarly, for Granby, we would want to alter the scale to Fixed Width and enter 10 for a more intuitive break value. Then we could use the export procedures from the making maps demo to individually export these specific maps. On the whole, the Atlas Panel has facilitated rapidly mapping multiple locations and variations in attributes of interest with relatively little input or effort.

Toggling back we could now select another CMA of interest, such as Drummondville – or one of the outputs from the Atlas that needed edits such as Edmonton. Then we could select the Inset map, re-expand the Grids drop-down and click the Modify button – updating the Grid Interval for X to 5 degrees. We could then resize the North Arrow to ensure it's not obscuring the main map features, and then individually export this map using the procedures covered in the previous demo.

So with the Export Atlas tool we can specify the file format to use. The same formats from the single export options are available. It is a good idea to create a separate directory for the output maps. Then specify the output resolution and click Save to run. We won't actually run the output as it's a time-intensive process, taking around 35 minutes.

The last thing that we would want to do is save the print layout as a template for further use, such as reapplying for map production in the next census collection period or to generate maps with a different variable of interest at the Census Tract level.

So use the Atlas panel with a coverage layer to rapidly and easily generate multiple maps for particular areas of interest. Save the template for re-use in examining other variables of interest or applying in another time-period. Apply these skills to your own areas of expertise and datasets of interest for semi-automated map production.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Video - Making Maps in QGIS with the Print Layout (Part 2)

Catalogue number: 89200005

Issue number: 2020015

Release date: November 23, 2020

QGIS Demo 15

Making Maps in QGIS with the Print Layout (Part 2) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Making Maps in QGIS with the Print Layout (Part 2)")

So using the Layout Manager, we can reopen our Layout from part one. And now we'll cover adding some additional optional map items. When used judiciously, these items can help enhance the interpretability and aesthetic of a map. One of the final procedures we covered in Part I was locking the Layer and Style of Layers for our Main Map, meaning that changes to the main interface will not impact its appearance in the Print Layout.

So the first item that we'll add is the Inset Map. Back in the main interface we'll toggle the main map group off, toggle the inset on and zoom to the provincially aggregated layer. Now back within the Layout we can add the Inset map, with the Add Map to Layout tool – and left-clicking and dragging across for its placement within the Layout.

We'll then add another scale-bar item - for Map 2 this time. Select Numeric from the Format drop-down. And we'll place it below the Inset map, altering the placement parameter in the display drop-down to Center and adjusting the placement within the Layout. To ensure an intuitive and interpretable scale ratio once again we'll enter a fixed scale value for the Inset Map using the Data Defined Override drop-down, in this case entering 55 million in the Expression window.

Now let's add a picture. So with the tool engaged click and drag where it should be placed within the Layout. Now we can load the image from our Directory by clicking the triple dot icon. It can then be resized and placed within the Layout as needed.

For the final optional item, let's add part of an attribute table to the layout. Click the Attribute Table icon and drag in the layout for its positioning. We can then specify which layer to use, selecting our subset layer – JMBCDPop – in the Layers drop-down. Clicking the Attributes box, we can then specify which fields should be included or removed from the table. So we'll remove the Census Division Unique Identifier, Census Division Type, the provincial fields, as well as the Total Private Dwelling counts, percent change and area fields. We'll rename the remaining fields in the Heading column, which can be of any length: Name, Population 2011 and 2016 written in full, Percent Change, Density, Rank (CA) for the national level and Rank (MB) for the provincial level. We can then specify the fields to sort the table by. Here we'll use the CDName field. We could also add additional sorting rules, much like in Excel, here using the provincial population rank.
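Those two sorting rules behave like a multi-level sort on a primary and secondary key. A sketch of the idea, with invented placeholder rows (duplicate names included only so the secondary key is visible):

```python
# Sketch of the table's sorting rules: sort on the name field first,
# then on the provincial population rank as a tie-breaker.
rows = [
    {"Name": "Eastman", "RankMB": 3},
    {"Name": "Brandon", "RankMB": 2},
    {"Name": "Brandon", "RankMB": 1},
]
rows.sort(key=lambda r: (r["Name"], r["RankMB"]))

print([(r["Name"], r["RankMB"]) for r in rows])
# [('Brandon', 1), ('Brandon', 2), ('Eastman', 3)]
```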

So now that the table is added we can nudge it down within the layout, and as we resize it within the layout the number of features in the table changes. We could also control this using the parameters in the Feature Filtering drop-down.

In the Appearance section, select Advanced Customization. Check even rows and we'll alter the colour formatting to be a light grey to distinguish the individual rows in the table. In the Show Grid drop-down uncheck the Draw Horizontal Lines box.

Now we'll add a Node Item, using the Add Polyline function, to the Print Layout. We'll use the lines to form the horizontal border lines for the attribute table and header row. So left-click twice for the beginning and end of the line, and right-click to complete. We'll then edit the length of the line to ensure that it perfectly matches the width of the attribute table. Then we can copy and paste the first line and place it in the other two locations. Once this is done we can select the items by clicking and dragging over the Layout, once again using the Group tool on the Actions toolbar, and lock their position in the Items Panel.

With all map items formatted, we can now export the map. So the Map can be exported as an image or as a .pdf. The image file format enables it to be rapidly added within a document as a figure or supporting information, while the .pdf can be used to share the map with others in a widely accessible but protected file format. Here we'll export the map as an image. Navigate to the desired directory and provide an output filename. Then we can enter the desired resolution. In general, 300 dots per inch will suffice for most applications. But say we want to include the map on a poster, then we could use a finer resolution of 600 or even 1200 dots per inch as required. Then click Save, and the export procedure takes about a minute to complete.
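The arithmetic behind the resolution choice is just page size in inches times dots per inch. The function name and the A4 landscape page size below are assumptions for illustration, not values from the demo:

```python
# Sketch of how export DPI translates to output pixel dimensions.
# An A4 landscape page is roughly 11.69 x 8.27 inches.
def export_pixels(width_in, height_in, dpi):
    return round(width_in * dpi), round(height_in * dpi)

print(export_pixels(11.69, 8.27, 300))  # (3507, 2481)
# Doubling the DPI doubles each dimension, so file size grows roughly fourfold.
```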

When it is completed, click the Hyperlinked Filename at the top of the Layout. And then we can open and examine the output map. If we need to make any adjustments we could easily return to the layout, incorporate them and repeat the export procedure.

So in this demo we explored the principles, procedures and tools for creating a map in the Print Layout. Specifically, users should now have the knowledge and skills to: distinguish mandatory and optional map items; use available tools in QGIS's main interface and Print Layout to prepare map data; and add map items to the Print Layout and alter their properties using available panels, such as using the lock layers and Style functions in the Item Properties panel to add inset maps, and using the group and lock functions in the Items panel to fix item positions in the Layout.

Finally you should also feel comfortable saving and exporting finalized maps. So apply these skills to your own areas of expertise to create well-balanced, easy-to-interpret maps.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Video - Making Maps in QGIS with the Print Layout (Part 1)

Catalogue number: 89200005

Issue number: 2020014

Release date: November 23, 2020

QGIS Demo 14

Making Maps in QGIS with the Print Layout (Part 1) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Making Maps in QGIS with the Print Layout (Part 1)")

Hello everyone. Today we'll learn how to create maps in QGIS using the Print Layout – a separate window from the main interface used for mapping. Specifically we'll cover navigating the window, and using its tools and panels; adding map items, and distinguishing those that are mandatory versus optional; and saving and exporting a map.

Cartography, or map-making, blends the science and art of GIS. Maps are powerful tools for conveying information to a wide audience. The creator chooses which features are included, how they are visualized and how to best convey the information. Maps should be intuitive and readily interpretable. Important factors to consider are similar to those introduced in the vector visualization tutorials, including:

What is the main theme or message of the map and who is the target audience?

This helps define essential layers for the map. And generally you should exclude peripheral layers that may overcrowd your map or message.

Second, is the visualization logical, and does it facilitate the distinction of layers or features?

And similarly, is the level of detail or generalization within the layers suited to the map scale?

Finally, have you selected an appropriate projection for the location of your map?

With the joined division and aggregated provincial layers from the one-to-one join by attributes demo loaded in the Layers Panel, division features in Manitoba were selected and subset to a new layer – JMBCDPop – which will be our main map. Selected features were also run through the Dissolve tool using the province name field to create the MB outline layer, which was grouped with the province layer to create the inset map. Inset maps show the broader geographic location and context of a main map. The groups in the Layers Panel will help us add our two maps separately within the Print Layout, which is important since the Layout is actively tied to the Canvas in the main interface.

So with the map layers created and grouped, we now need to establish our visualizations. Instead of using the Layer Properties Box, today we'll use the Layer Styling panel, right-clicking on an empty toolbar area and selecting it from the drop-down. The panel contains the main visualization tabs from the Layer Property Box, and layers can also be selected from the drop-down at the top – enabling the rapid visualization of multiple layers.

So we'll apply a graduated symbology to the Pop Percent Change field, using the Spectral Colour ramp, Pretty Breaks as the Mode and 8 classes. We'll also change the precision to 1. Then in the Labels tab, we'll once again specify the Percent Population Change field to use for labelling. And in the Formatting subtab check Formatted Numbers and change the decimal places to 1. Since the visualized field includes negative values we could also check 'Show plus sign' if desired, but here we'll leave it unchecked.

We'll add a small text buffer around the labels using the default values. And for the Placement, we'll select Free (Slow). This will rotate the labels to fit them within the feature – they will remain interpretable since we have not checked Show Upside-Down Labels in the Rendering tab. We'll also check the options to only draw labels that fit completely within the feature and to discourage labels from covering the feature's boundary. You may have noticed that I haven't clicked Apply yet; that's because the Live Update box is checked, meaning edits are applied on-the-fly as they're entered.

Now we'll select the Manitoba outline layer from the drop-down and switch back to the Symbology tab. Click on Simple Fill, then Fill Colour, and alter the Opacity to 0%, or fully transparent. Then we can change the stroke colour to a dark red and enter a width of 0.75 – creating the outline of the main map.

Closing the Styling Panel, we'll copy and paste the style from our subset layer – selecting all categories – to the aggregated province layer to ensure consistent visualization of population changes across the two layers and levels. Now we can toggle off other layers, leaving only the main map group. If a bookmark was created it can now be used, or in this case, since our main map is one layer, the Zoom to Layer tool. And we can use other zoom tools to refine the scale of the Canvas as needed. The scale value at the bottom of the interface is approximately 1 in 7 million.

So to access the Print Layout window, click the New Print Layout icon on the Project Toolbar. The layout manager icon to the right can be used when there are existing layouts that you want to access for further use. Clicking on the New Layout icon we need to provide a name – which we'll call Making Maps in QGIS. Obviously a more specific title is helpful to distinguish different layouts once multiple maps have been created.

So the Print Layout appears as such. There are a variety of panels on the right side of the window, the most important being the Item Properties Panel, where all items in the Layout are formatted, defaulting to the currently selected item. On the left-hand side are a variety of tools for adding different map items to the Layout.

So the first mandatory component is the main map – so we'll click the Add New Map icon and then click and drag to place it within the Layout. There are two interaction icons. The Select/Move Item, enabled by default is used to move, place and resize items within the Layout, while the one below it, Move Item Content, applies to Map Items only and can be used to alter the Canvas location and scale from within the Print Layout. Map items may take a moment to update when the formatting parameters are changed.

So here we can specify the properties for Map 1, such as setting the scale to that of the Canvas, or entering a specific value for the scale in the Main Properties drop-down. However, the scale will adjust automatically if the map item is resized, which is not ideal. So to prevent this we can click on the data defined override box, select edit, and enter the desired scale in the expression box, in this case seven and a half million. Now we can adjust the size without it impacting the scale. Engaging the Move Item Content tool we'll move the canvas for our main map feature so that it is fully visible within the Layout. And we'll also enter -5.0 for the Map Rotation to remove the tilted appearance of Manitoba, which is tied to the applied map projection.
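As a quick aside on what a scale denominator of seven and a half million means on the page (an illustration, not a QGIS step), each centimetre on the printed map represents 75 km on the ground:

```python
def ground_km_per_map_cm(scale_denominator):
    """Ground distance in km represented by 1 cm of map at a 1:N scale."""
    real_cm = scale_denominator   # 1 cm on the map covers N cm on the ground
    return real_cm / 100 / 1000   # cm -> m -> km

print(ground_km_per_map_cm(7_500_000))  # 75.0 km per map centimetre
```

The same arithmetic explains why locking the scale matters: resizing the map item without a fixed scale silently changes this ratio.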

The Guides panel can help us place items within the Layout. So we can click the Plus Icon and specify a distance for indentation. The guides then appear as dotted red lines. In addition there are a variety of alignment and distribution tools on the Actions toolbar to facilitate laying out and distributing items to create an aesthetically pleasing, well balanced map.

The second mandatory item is the North Arrow, which can point towards true, magnetic or grid north – particularly relevant for mapping at higher latitudes. So we'll use the Add Arrow function, left-clicking twice to define the start and end of the arrow – drawing a vertical line – and then right-clicking to finish. The North Arrow does not automatically point towards north, so expand the Rotation drop-down and enter -5 to match the rotation applied to our main map. We'll add a label above it, replacing the default text with a capital N. Clicking on the Font box, we can alter the size to a more appropriate value – 20 in this case should suffice – and set the alignment to Center and Middle. Once again we'll rotate the label.

Dragging across the Layout we can select both items and group them using the Group tool on the Actions toolbar. Now we can resize and reposition them within the Layout. In the Items panel we can then toggle Items on and off, as well as lock their position within the Layout. So now clicking and dragging across the Layout only the main map is selected.

The third mandatory item is a scale-bar, enabling real-world distances between features to be approximated from the map. So click the scale-bar icon and click within the Print Layout. We can select the map to which it applies, and the format. Here we'll stick with the default style, single box. Depending upon the map scale we could change the desired units in the drop-down; for our map, kilometres are most appropriate. We'll include 4 segments to the right of 0, and replace 75 with a more interpretable break value, entering 200 in this case. We can use the arrow keys on the keyboard to nudge items in a direction of interest within the layout to facilitate their positioning.

The fourth mandatory item is a legend to interpret and distinguish the mapped features. By default the legend includes all layers in the Layers Panel. We can include a generic legend title at the top if needed, but here we'll leave it blank. Then we'll uncheck the Auto-Update box to enable the editing functions and ensure formatting changes are retained. We can reorder legend entries with the arrows and use the minus icon to remove them. So here we'll remove the Manitoba outline and aggregated provincial layers. And right-click on our main map group title and select hidden to remove it from the Legend. We can also rename the layers in the Legend Entries drop-down by double left-clicking. So we'll rename JMBCDPop to Percent Changes. We can also edit value ranges from within the Layout by expanding the Layers drop-down and double left-clicking. Here we'll change the upper and lower break values for the legend to less than -4.0 and greater than 14.0. We'll also remove its background and alter its placement to align with the scale-bar and the main map.

The fifth essential component is a title. It should be simple and quickly convey the map content, including the theme, location and level. So here we'll call it Percent Population Changes in Manitoba (2011-2016): Census Divisions. We'll change the font size to 34, specify the alignment and resize the text box accordingly.

So the final mandatory component is a set of additional text items that specify the map projection, creator and source references - particularly important when the map will be released as a stand-alone document. We can enter the information manually or use expressions to semi-automate its entry. So we'll enter Prepared by: Insert Name, or click the Insert Expression button and in the variables drop-down double-click user full name. Then type Projection, colon, NAD83 Statistics Canada Lambert, open-bracket. Reopening the expression box, we can double-click project_CRS in the variables drop-down. As shown, the manually entered information and expressions are being automatically formatted in the Item Properties panel. Then we need to specify the source references, typing datasets accessed from Statistics Canada. Finally, we'll add an expression to include the date created and the program used. We'll use the concat function and commas to separate the different components. So first, open single-quote and type Created on, colon, space, close quote; then type todate with $now enclosed by brackets. Add another comma, reopen single quotes, space, type with QGIS, space, close quote, and finally double-click @qgis_short_version in the variables drop-down.
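Assembled in one piece, the dictated expression would look roughly like this in the QGIS expression editor (a sketch only; exact function spellings such as todate versus to_date vary by QGIS version):

```
concat('Created on: ', todate($now), ' with QGIS ', @qgis_short_version)
```

Because the pieces are concatenated by an expression rather than typed as literal text, the date and QGIS version update automatically each time the layout is refreshed.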

So with map 1 formatted and all mandatory components added, we will select the main map and Lock the Layers and Layer Styles in the Item Properties panel so that changes in the main interface do not affect its format or scale in the Layout window. Now we can click the Save icons at the top. The first will save both the project in the main interface as well as the Layout, which we can then access with the Layout Manager for further edits. Conversely the second tool can be used to save a particular Layout as a Template for repeated use, which we'll cover in a follow up demo. So click the first Save Icon, which we'll then access to discuss optional map items in Part II.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Evaluation of the Census of Agriculture and Innovation in the Agriculture Statistics Program

Evaluation Report

March 2020

The report in short

The Agriculture Statistics Program (ASP) is comprised of an integrated set of components including crop and livestock surveys, farm economic statistics, agri-environmental statistics, tax and other administrative data, research and analysis, remote sensing and the Census of Agriculture (CEAG). The statistical information produced by the CEAG is unique in its ability to provide a comprehensive snapshot of the industry and its people, as well as small area data, both of which are instrumental not only to the agricultural industry, but also for meeting the data requirements of environmental programs, health programs, trade and crisis management. ASP statistical information is used by a wide range of organizations, including different levels of government, not-for-profit and private organizations, academic institutions, and individual Canadians.

This evaluation was conducted by Statistics Canada in accordance with the Treasury Board Secretariat's Policy on Results (2016) and Statistics Canada's Risk-Based Audit and Evaluation Plan (2019/2020 to 2023/2024). The main objective of the evaluation was to provide a neutral, evidence-based assessment of the 2016 CEAG dissemination strategy, and of the design and delivery of the CEAG migration to the Integrated Business Statistics Program (IBSP). The evaluation also assessed projects in the broader ASP, with a focus on projects supporting Statistics Canada's modernization initiative.

The evaluation methodology consisted of a document review, administrative reviews and key informant interviews with Statistics Canada professionals working in the Agriculture Division, and other relevant divisions. Additionally, interviews were conducted with key users and partners external to Statistics Canada. The triangulation of these data collection methods was used to arrive at the overall evaluation findings.

Key findings and recommendations

Census of Agriculture dissemination strategy

CEAG data are used by a wide range of organizations to understand and monitor trends, formulate advice on policies and programs, and address requests from stakeholders. The majority of interviewees were satisfied with the dissemination of the 2016 CEAG and noted it was an improvement over 2011. Data tables were identified as the most used product while other products were relevant but less useful. In terms of timeliness, interviewees were satisfied with the release of the first set of tables (farm operator data - one year after Census Day); however, the timeframe for releasing the remaining two sets of data tables affected their usefulness (2.5 years after Census Day for the last data table release with socioeconomic data). They also noted that there were gaps in cross-analysis with non-agricultural sectors and in emerging sectors. Finally, web tools were not being used because of a lack of guidance on how to use them and how to interpret the data.

The Assistant Chief Statistician (ACS), Economic Statistics (Field 5), should ensure that:

Recommendation 1

For the 2021 CEAG, the Agriculture Division explore ways to improve the timeliness of the last two sets of data tables (historical data, and socio-economic data) and increase cross-analysis with non-agricultural sectors.

Recommendation 2

Web tools include guidance on how to use them and how to interpret data from them. A proactive approach to launching new tools should be taken. Webinars were identified as an effective channel, and the use of other channels would allow for even wider coverage.

Census of Agriculture migration to the Integrated Business Statistics Program

The CEAG migration to the IBSP was proceeding as planned at the time of the evaluation. The transition phase was complete and the integration phase was well underway. Governance structures were in place and deliverables and schedules were being managed effectively. Efforts to resolve issues, such as those related to compatibilities between the Collection Management Portal (CMP) and the IBSP, and the availability of tools and capacity to support data quality assessments, were continuing. The start of the production phase will bring additional risks as new resources become involved and time pressures increase.

The ACS, Field 5, should ensure that:

Recommendation 3

Unresolved issues for the migration to the IBSP, including incompatibilities between the IBSP and the CMP as well as the IBSP processing capacity, are addressed prior to the production phase.

Recommendation 4

Significant risks during the production phase, particularly with regard to data quality assessments and the exercising of roles and responsibilities, are monitored and mitigated.

Projects supporting the modernization initiative

All five projects reviewed were aligned with the modernization pillars and expected results. Most of the projects focussed on increasing the use of data from alternative sources and integrating data. The evaluation found that while governance structures existed and regular monitoring was taking place, project management practices could be strengthened. For example, clearly defined measurable outcomes were often missing, best practices were not being systematically documented, shared or leveraged, and risk management was ad-hoc in some cases. Project management is perceived to be time and resource consuming in an environment focussed on expediency.

The ACS, Field 5, should ensure that:

Recommendation 5

Planning processes for future projects falling outside the scope of the Departmental Project Management FrameworkFootnote 1 include an initial assessment that takes into account elements such as risk, materiality, public visibility and interdependencies. The assessment should then be used to determine the appropriate level of oversight and project management.

Recommendation 6

Processes and tools for documenting and sharing of best practices are implemented and lessons learned from other organizations (internal and external) are leveraged.

What is covered

The evaluation was conducted in accordance with the Treasury Board Secretariat's Policy on Results (2016) and Statistics Canada's Integrated Risk-Based Audit and Evaluation Plan (2019/2020 to 2023/2024). In support of decision making, accountability, and improvement, the main objective of the evaluation was to provide a neutral, evidence-based assessment of the 2016 Census of Agriculture (CEAG) dissemination strategy, and of the design and delivery of the CEAG migration to the Integrated Business Statistics Program (IBSP)Footnote 2. The evaluation also assessed projects in the broader Agriculture Statistics Program (ASP), with a focus on projects supporting Statistics Canada's modernization initiative.

The Agriculture Statistics Program

The mandate of the ASP is to provide economic and social statistics pertaining to the characteristics and performance of the Canadian agriculture sector and its people. It aligns with section 22 of the Statistics Act, which stipulates that Statistics Canada shall "collect, compile, analyse, abstract and publish statistics in relation to all or any of the following matters in Canada: (a) population, (b) agriculture." It also aligns with section 20Footnote 3 of the Statistics Act, which requires Statistics Canada to conduct a CEAG. A CEAG has been conducted nationally and concurrently with the Census of Population since 1951Footnote 4.

According to the ASP Performance Information Profile, the ASP provides data to support and evaluate the fulfillment of requirements or objectives contained in other legislation such as the Farm Products Agencies Act, the Agricultural Products Marketing Act, and the Pest Control Products Act. The ASP also supplies the Canadian System of Macroeconomic Accounts with data required under the Federal-Provincial Fiscal Arrangements Regulations and the International Monetary Fund's Special Data Dissemination Standard.

The ASP includes an integrated set of components: crop and livestock surveys, farm economic statistics, agri-environmental statistics, tax and other administrative data, research and analysis, remote sensing and the CEAG.

The Census of Agriculture

The CEAG collects data on the state of all agricultural operations in CanadaFootnote 5 including: farms, ranches, dairies, greenhouses, and orchards. The information is used to develop a statistical portrait of Canada's farms and agricultural operators. Typically, data are collected on: size of agricultural operation, land tenure, land use, crop area harvested, irrigation, livestock numbers, labour, and other agricultural inputs. Its "whole farm" approach to capturing data directly from agricultural producers provides a comprehensive count of the major commodities of the industry and its people, and a range of information on emerging crops, farm finances, and uses of technologies in agricultural operations.

The objectives of the CEAG are

  • to maintain an accurate and complete list of all farms and types of farms for the purpose of ensuring optimal survey sampling - at the lowest cost and response burden - through categorization of farms by type and sizeFootnote 6
  • to provide comprehensive agriculture information for detailed geographic areas such as counties - information for which there is no other source and that is critical to formulating and monitoring programs and policies related to the environment, health, and crisis management for all levels of government
  • to provide measurement of rare or emerging commodities, which is essential for disease control and trade issues
  • to provide critical input for managing federal and provincial government expenditures in the agriculture sector.

The Agriculture Division of the Agriculture, Energy and Environment Statistics Branch is responsible for the ASP. The division has many long-standing strategic partnerships with key stakeholders and data users, including federal departments and agencies, provincial and territorial agriculture ministries, local and regional governments, farmers' associations, the agriculture industry, universities, and researchers. The division has established forums to obtain feedback on emerging issues and needs. These include the Advisory Committee on Agriculture and Agri-Food Statistics and the Federal-Provincial-Territorial Committee on Agriculture Statistics. Internal governance bodies such as the CEAG Steering Committee are also in place to help direct and monitor implementation.

The Evaluation

The scope of the evaluation was established based on meetings and interviews with divisions involved in the ASP. The following areas were identified for review:

Evaluation issues, Evaluation questions
Evaluation issues Evaluation questions
2016 CEAG dissemination strategy To what extent did the 2016 CEAG dissemination strategy address the needs of key users in the following areas?
  • Timeframe of releases (i.e., for all releases, between each release)
  • Coverage and level of detail
  • Types and formats of products
  • Cross-analysis with non-agricultural sectors
  • Access to data
Design and delivery: CEAG migration to the IBSP To what extent are governance structures for collection and processing (migration to the IBSP) designed to contribute to an effective and efficient delivery of the 2021 CEAG?
ASP projectsFootnote 7 supporting the modernization initiative To what extent are there effective governance, planning and project management practices in place to support modernization projects within the ASP?

Guided by a utilization-focused evaluation approach, the following quantitative and qualitative collection methods were used:

 
Administrative reviews

Review of ASP administrative data on activities, outputs and results.

 
Document review

Review of internal agency strategic documents.

Key informant interviews (external) n=28

Semi-structured interviews with key users from federal departments, provincial and local governments, farm associations, private sector organizations and research institutions.

Key informant interviews (internal) n=14

Semi-structured interviews with individuals working in the Agriculture Division and partner divisions.

Four main limitations were identified, and mitigation strategies were employed:

Limitations, Mitigation strategies
Limitations Mitigation strategies
Because of the large number of users and partners using data, the perspectives gathered through external interviews may not be fully representative. External interviewees were selected using specific criteria to maximize the strategic reach of the interviews: organizations of different types, from a wide range of locations across Canada, that use CEAG data extensively were selected. Evaluators were able to find consistent overall patterns.
Key informant interviews have the possibility of self-reported bias, which occurs when individuals who are reporting on their own activities portray themselves in a more positive light. By seeking information from a maximized circle of stakeholders involved in the ASP, including the CEAG migration to the IBSP (e.g. the main groups involved, multiple levels within groups), evaluators were able to find consistent overall patterns.
Limited documentation was available on the projects sampled for the evaluation. Key staff working on ASP projects were interviewed and a strategy to gather additional documents during the interview sessions was put in place. Additional interviews were conducted, as needed, to fill the gaps.
The scope of the evaluation related to innovation reflected only a select number of topics (i.e., alignment, project management) rather than the full spectrum of factors which may have an impact. The evaluation methodology was conducted in such a way that other topics related to innovation could be identified and considered.

What we learned

1.1 2016 Census of Agriculture dissemination strategy

Evaluation question

To what extent did the 2016 CEAG dissemination strategy address the needs of key users in the following areas?

  • Timeframe of releases (i.e., for all releases, between each release)
  • Coverage and level of detail
  • Types and formats of products
  • Cross-analysis with non-agricultural sectors
  • Access to data

Summary

To inform the 2021 CEAG dissemination strategy the evaluation assessed the extent to which the 2016 dissemination strategy addressed the needs of key users in different areas. The majority of users considered the 2016 CEAG an improvement compared with the 2011 CEAG and were satisfied with the overall approach taken. However, the evaluation found some areas for improvement, particularly with regard to the timeframe of releases, coverage, and guidance on web tools.

Census of Agriculture data are used for multiple purposes with data tables being the product of choice

CEAG data are used by organizations to portray the agriculture sector in their jurisdiction or sector of the economy. For provincial government departments, their portrait allows them to understand trends within their province and to compare them with other jurisdictions. Subprovincial data are also available for analysis of smaller geographic areas. For farm associations, data allow them to monitor trends within their area of interest. Overall, CEAG statistical information is used for identifying and monitoring trends, providing advice on policies and programs, addressing requests or questions from various stakeholders, and informing internal or public communications.

A large majority of external interviewees mentioned that, in general, the 2016 CEAG products and statistical information shed light on the issues that were important for their organization. The evaluation found that the data tables from Statistics Canada's website were the products of greatest utility to users. In particular, the Farm and Farm Operator Data tables were identified as the products most used. This was especially true for organizations that had internal capacities to conduct their own analysis. The analytical products and The Daily releases were identified as being less useful, but still relevant since they provided a different and objective perspective on specific topics. This was true for other products as well (e.g., maps, infographics) - interviewees responded that they used them only occasionally or rarely but still believed they were useful. Finally, a number of provincial users also mentioned that they received a file containing CEAG statistical information, which helped facilitate their ability to conduct their own analyses.

Table 1: Use of Census of Agriculture products
Products Extensively Occasionally Rarely Don't know
Data tables from the website 17 6 1 0
The Daily releases 9 5 10 0
Boundary files 7 6 11 0
Analytical products 5 12 7 0
Thematic maps 4 8 12 0
Infographics 3 9 12 0
Dynamic web application 3 6 14 1

Besides CEAG statistical information, a majority of users mentioned that they consulted additional sources of information, either from Statistics Canada or other national and international organizations, to fill gaps. This included information on commodity prices, imports and exports of agricultural products, land values, and interest rates. Users consulted international sources to compare data with other countries (e.g., United States and Australia) or to assess global market demand for certain agricultural commodities (e.g., livestock, crops, etc.).

Historical and socioeconomic data tables wanted sooner

Three data table releases took place for the 2016 CEAG: Farm and Farm Operator Data (May 10, 2017 - one year after Census Day); select historical data (December 11, 2017 – approximately one and a half years after Census Day); and a socioeconomic portrait of the farm population (November 27, 2018 – approximately two and a half years after Census Day). It should be noted that the tool used to create the socioeconomic portrait of the farm population was not part of the original scope for the 2016 CEAG but was added later - thus the reason for the relatively late release.

The majority of interviewees believed the time lapse to receive the first set of data tables was satisfactory. While they would have welcomed an earlier release, they recognized the level of effort required to produce the information and felt the time lapse was reasonable given the quality of the information they received. However, overall, interviewees believed that the time lapse between Census Day and the final data table releases, specifically for the socioeconomic tables, affected the usefulness of the statistical information. In particular, organizations developing policies or programs targeting young farmers, specific population groups, or educational advancements would have benefited from timelier data.

Figure 1: Dissemination schedule (refer to Appendix B for additional details)

May 10, 2016

Launch of the 2016 Census of Agriculture and the 2016 Census of Population.

May 10, 2017

The first set of products for the 2016 CEAG was released, including a Daily release, farm and farm operator data (47 data tables), provincial and territorial trends (11 analytical products) and provincial reference maps (34 maps).

May - June 2017

A series of weekly analytical articles covering different topics was released, along with an infographic titled 150 Years of Canadian Agriculture.

September - November 2017

A boundary file and analytical products were released.

December 11, 2017

Select historical data were released.

December 2017 – April 2018

A number of maps along with an analytical article were released.

November 27, 2018

The Agricultural Stats Hub, a dynamic web application, was released as well as a Daily article, 13 data tables and 3 infographics. The application provided a socioeconomic overview of the farm population by linking agricultural and population data.

December 2018 – March 2019

A number of analytical articles were released.

July 3, 2019

Last release from the 2016 CEAG.

Table 2: Satisfaction with releases
How satisfied are you with the following? Satisfied Somewhat satisfied Not satisfied Unsure
Time lapse between Census Day and first release 15 5 3 1
Time lapse between each release 13 5 2 4
Time lapse between Census Day and release of all data 6 11 4 3

Some interest in preliminary estimates, so long as differences are small

Users were asked about the possibility of releasing preliminary estimates for specific high-level variables. The estimates would differ from the final data released; however, no specific examples of variables were provided to interviewees for consideration.

Half of the interviewees were not interested, with a large proportion advising against it. Several explained that the release of preliminary estimates would create confusion within their organizations and they would be required to explain the differences between the preliminary and final data. Most interviewees noted that any policy decision-making and trend analysis would continue to be based solely on final data.

Those who were either "very interested" or "slightly interested" indicated that the difference between the estimates and the final data would need to be small, otherwise they would prefer the status quo.

Some gaps remain

The Agriculture Division has several mechanisms in place to identify information gaps including: regular pre-census cycle consultations, the Federal-Provincial-Territorial Committee on Agriculture Statistics, the Advisory Committee on Agriculture Statistics, and engagement with national farm organizations. Based on these mechanisms, the CEAG builds on the content approved for the previous cycle to better address new and emerging agricultural activities. In addition, projects recently implemented by the Agriculture Division, particularly the Agriculture-Zero project, have filled several gaps (e.g., temporary foreign workers data).

The majority of interviewees were satisfied with the diversity of topics and themes covered. However, a number of information gaps were identified, particularly regarding emerging operations and fast-growing sectors such as organic farming. Needs were also identified for additional statistical information and further analysis related to farm succession, labour (e.g., foreign and contract workers), pesticide use, and new land use categories (e.g., loss of land to urbanization), as well as for additional variables covered over time (i.e., historical data). Finally, all interviewees wanted more granular data, although they recognized there are limitations related to confidentiality.

Table 3: Coverage
How satisfied are you with the following? Satisfied Somewhat satisfied Not satisfied Unsure
Types of agricultural operations covered 15 9 0 0
Number of topics or themes covered in each release 15 5 0 4
Cross-analysis with other topics and other agricultural surveys 10 7 2 5

Interviewees also wanted additional cross-cutting analysis between the agricultural sector and other sectors. For the 2016 CEAG, analysis with non-agricultural data, such as technology, innovation, and socioeconomic issues, was provided to users. This approach was highly regarded by those interviewed - but they wanted more. Evidence suggests that there is a growing appetite for cross-cutting analysis in areas such as technology, farm profitability, demographic shifts, transportation, and the environment.

Increased guidance on tools is needed

Two web tools were released for the 2016 CEAG: boundary files and the Agriculture Stats Hub. The evaluation found relatively low use of these two products when compared with other products such as data tables. Although some interviewees used the boundary files, the majority rarely did. Similarly, few interviewees used the Agriculture Stats Hub. A lack of guidance on how to use the tools and how to interpret the data was noted as a key impediment. The lengthy timeframe for releasing socioeconomic data, which included the Agriculture Stats Hub, was also identified as a factor that limited the use of the Hub.

Although the use of existing web tools for the 2016 CEAG was somewhat limited, a majority of interviewees were interested in having additional web-based tools, such as interactive maps, custom table building, and query tools, which would allow for the increased customization of products. As data tables were the product most used, tools attached to the tables would greatly benefit users. However, guidance and support must accompany the tools, and a more active approach to launching the tools is recommended.

More prominent communication of methodological information would be useful

Although methodological information is generally available, some interviewees noted that it would be useful to have it more prominently displayed in the products that are released, either in The Daily or as footnotes in the data tables. For example, since definitions used by Statistics Canada may differ from definitions used by farmer associations (e.g., how farm operator counts are calculated), information to explain the differences would be helpful.

Users were aware of releases, and data were accessible

The evaluation found a high level of satisfaction with the accessibility of statistical information, even though Statistics Canada's website was identified as a challenge. A high level of satisfaction was also reported for any custom data received. Interviewees were highly satisfied with the time lapse between first contact with Statistics Canada and the delivery of the product, the quality of the product, and the level of detail provided.

In terms of awareness of releases, the majority of interviewees stated that they were informed far enough in advance and were satisfied with the channels used. Most interviewees identified reminder emails as the most effective channel for being kept informed about releases.

Table 4: Notification of releases
Best way to be informed of releases Number of Respondents
Reminder emails 20
Calendar invites 7
Webinars 5
Social media posts 3

In addition, those who participated in webinars were very satisfied since the webinars provided additional information on the data available and major trends observed. Webinars were identified as opportunities to raise awareness of the products and data that will be available and to facilitate interpretation of the data and the use of the web tools.

1.2 Design and delivery: Census of Agriculture migration to the Integrated Business Statistics Program

Evaluation question

To what extent are governance structures for collection and processing (migration to the IBSP) designed to contribute to an effective and efficient delivery of the 2021 CEAG?

Summary

The evaluation assessed whether the governance structures associated with the CEAG's migration to the IBSP - including roles and responsibilities, interdependencies, and project management practices - will contribute to an effective and efficient delivery of the 2021 CEAG. The evaluation found some areas of risk that could have a negative impact on the delivery of the 2021 CEAG.

Migrating the Census of Agriculture to the Integrated Business Statistics Program is expected to create benefits

At the time of the evaluation, the Agriculture Division had already successfully migrated all of its surveys to the IBSP. The last component to be migrated is the CEAG; migration work began in fiscal year 2018/2019 and is expected to continue until fiscal year 2022/2023. The IBSP migrations are conducted in three phases: transition (defining program-specific requirements), integration (development and testing activities), and production (collection and processing tasks are implemented through the IBSP).

Because of its five-year cycle, the CEAG is considered an ever-migrating component to the IBSP. Similar to the divisional surveys that have already been migrated, it is expected that migration of the CEAG to the IBSP will create specific benefits for the CEAG:

  • reduced number of systems for collection, processing and storage through the adoption of common tools and statistical methods
  • facilitated integration and harmonization of data with all programs in the IBSP, including agriculture surveys
  • increased corporate support for systems, particularly when significant changes occur (e.g., cloud technology)
  • a more targeted approach for collection (i.e., follow-up operations) through the IBSP's Quality Indicators and Measures of Impact (QIMI) feature.

Roles and responsibilities are a risk during the production phase

For previous CEAG cycles, the Agriculture Division was responsible for designing, planning, implementing, and managing all required tasks, such as content determination, collection, processingFootnote 8, data quality assessmentFootnote 9, and dissemination. The migration to the IBSP for the 2021 cycle will change the governance of collection and processing tasks (and associated roles and responsibilities) because the Enterprise Statistics Division (ESD) is responsible for managing the IBSP.Footnote 10

The shift of processing responsibilities to ESD affects the CEAG team, since it will now act only in an advisory capacity for this task rather than being fully responsible for it. The same structures for the overall management of the CEAG will remain within the Agriculture Division while the migration to the IBSP brings in governance structures already established within ESD.Footnote 11

The evaluation found that the early part of the transition phase was challenging for the CEAG team as they were not familiar with the implications of migrating the processing activities to a different system run by another division. The CEAG's management team and ESD were key in resolving early challenges in the transition. In particular, both groups showed leadership in explaining potential benefits and impacts of the migration while ensuring that roles and responsibilities were well communicated and understood. Governance structures are also adequate. The leadership demonstrated during the transition phase facilitated the start of the second phase of the project – the integration phase.

The evaluation found that there are concerns regarding roles and responsibilities during the production phase as new individuals, such as subject-matter experts within the Agriculture Division and the IBSP production team within ESD, become involved while others leave the project as the integration phase ends. Based on previous survey migrations to the IBSP, the roles and responsibilities during the production phase are typically less clear than during previous phases. To help with this, a good practice identified during interviews is the involvement of production staff during the integration phase to help build continuity and understanding – this took place when the surveys conducted by the ASP were migrated to the IBSP. With the CEAG, most of the divisional staff participating in the integration phase are also part of the team for the production phase.

ESD's Change Management Committee, which is responsible for the triage of required changes, will be involved during the production phase. Consideration of escalation processes is required when multiple committees (i.e., the CEAG Steering Committee, the IBSP Project Management Team and the Change Management Committee) are involved in the decision-making process, particularly during crunch periods typically observed in the production phase. Although the change management process has been defined and cross-membership within committees and working groups was identified as a mitigating factor, the risk of ineffective and inefficient decision-making because of an increased number of governing bodies remains.

Deliverables and associated schedules are well-managed

The migration of the CEAG to the IBSP is managed by the existing working groups. So far, for the transition and integration phases, effective practices were in place to manage deliverables, associated schedules, and outstanding issues. The transition phase, led by ESD in collaboration with the Agriculture Division, worked as planned. Activities for the integration phase, which were being implemented at the time of the evaluation, were also working as planned. The current IBSP integration schedule is seen as robust and includes the first set of processing activities. The schedule for the production phase is in place and is reviewed and updated regularly. While deliverables, schedules, and outstanding issues are being managed effectively, the differentiation between outstanding issues and risks inherent to the migration, particularly for the production phase, has yet to be clearly articulated.

In addition to the IBSP migration schedules, two other schedules come into play. As collection for the 2021 Census of Population and the 2021 CEAG are conducted in parallel, the Census of Population schedule is a crucial element for the development and implementation of the CEAG's internal schedule. All three schedules have varying levels of flexibility: the Census of Population schedule is inflexible, the CEAG schedule is flexible, and the IBSP schedule, given its focus on collection and processing, is considered moderately flexible. The Agriculture Division is the main conduit for the alignment of all schedules. Requirements from the Census of Population schedule are assessed on a continuous basis and discussions are held with ESD, as needed, to modify the IBSP schedule and the CEAG schedule. At the time of the evaluation, no major changes to the schedules were required, but it is expected that shifts will occur during the production phase. JIRA, which is the system used for change management (e.g., outstanding issues, schedules, deliverables) by the CEAG, the IBSP, and the Census of Population, is seen as an effective tool.

Incompatibilities between the Collection Management Portal and the Integrated Business Statistics Program to be resolved

A unique approach for data collection will be used for the 2021 CEAG - different from the one used for the 2016 CEAG and different from other Statistics Canada surveys that have migrated to the IBSP. For the 2016 CEAG, collection was under the responsibility of the Agriculture Division and took place through the Collection Management Portal (CMP) – a shared collection platform with the Census of Population. The CEAG team was responsible for monitoring collection and the management of follow-up operations. Because of synchronicity with the Census of Population, the CMP will continue to be used for CEAG collection in 2021.

Surveys under the IBSP (which are business-focused in nature) are typically collected through a different platform, the Business Collection Portal. New and unique linkages between the CMP and the IBSP need to be designed, tested, and operationalized for the 2021 CEAG collection operations. Links were still under development at the time of the evaluation. Although some functionalities are now operational, there is still development work to be done. For example, paradata from the CMP (e.g., information related to the collection process, such as attempts to contact someone, comments provided to an interviewer, completion rate) were not compatible with the IBSP at the time of the evaluation. Although work is being done to resolve the issue, the incompatibility of CMP paradata would disable the IBSP's QIMI feature, which allows for a targeted process for follow-up operations (i.e., prioritizing follow-up operations to target units that have the most effect on the data). QIMI is an effective tool used to support data quality; the 2021 CEAG data quality assessment strategy would need to be adapted should it not be available. There is a risk that some of the relationships between the CMP and the IBSP will not be fully developed or tested in time for the production phase.
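The idea behind impact-targeted follow-up can be illustrated with a minimal sketch. This is not the actual QIMI implementation; the field names and the scoring rule (a non-respondent's estimated share of the total) are assumptions made for illustration only.

```python
# Illustrative sketch only: rank non-responding units by their estimated
# impact on a published total, so follow-up effort goes where it matters most.
def prioritize_followups(units):
    """Return non-respondent ids sorted by estimated share of the total.

    `units` is a list of dicts with hypothetical fields:
      - "id": unit identifier
      - "responded": whether data have been collected
      - "expected_value": a prior estimate of the unit's contribution
        (e.g., from the previous cycle or administrative data)
    """
    nonrespondents = [u for u in units if not u["responded"]]
    total = sum(u["expected_value"] for u in units) or 1.0
    # Impact score: fraction of the estimated total still unaccounted for.
    scored = [(u["id"], u["expected_value"] / total) for u in nonrespondents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

units = [
    {"id": "A", "responded": True, "expected_value": 50.0},
    {"id": "B", "responded": False, "expected_value": 30.0},
    {"id": "C", "responded": False, "expected_value": 5.0},
]
print(prioritize_followups(units))  # "B" ranks first: it carries more of the total
```

Under this (assumed) scoring rule, follow-up resources would target unit B before unit C, since its missing value has roughly six times the effect on the total.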

Integrated Business Statistics Program processing capacity and available tools will affect data quality assessment activities

Data quality assessment activities will remain under the responsibility of the Agriculture Division. While the IBSP is designed for data collection and processing, it also includes features supporting data quality assessments, such as QIMI, rolling estimates, inclusion of non-response variances, and values attributed by imputation. However, given the volume of data with the CEAG, some validation processes will not be usable, and alternative tools outside the IBSP will need to be developed and tested. The use of alternative tools will require a reconfiguration of the data quality assessment strategy.

Although the generation of rolling estimates is seen as an important step to ensuring data quality, concerns were raised about the IBSP's processing capacity. In previous cycles, the CEAG team was able to impute, run, and analyze data at a higher rate than is currently possible under the IBSP. As data change between the generation of a rolling estimate and its completion, there is a concern that subject-matter experts will be validating outdated data. Although the IBSP's processing capacity has improved since the start of the CEAG migration and further improvements are expected, concerns remain.
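The concept of a rolling estimate can be sketched in a few lines. This is a simplified illustration, not the IBSP's actual method: the names and the fallback rule (imputing missing units from a prior value such as previous-cycle data) are assumptions. The point is that the estimate is recomputed as responses arrive, so a slow recomputation cycle means analysts validate a stale snapshot.

```python
# Illustrative sketch only: a rolling estimate recomputed as collection proceeds.
def rolling_estimate(responses, priors):
    """Combine collected values with imputed priors for missing units.

    `responses` maps unit id -> reported value (respondents so far);
    `priors` maps unit id -> a fallback value (e.g., previous-cycle data).
    """
    return sum(responses.get(uid, prior) for uid, prior in priors.items())

priors = {"A": 100.0, "B": 40.0, "C": 10.0}
print(rolling_estimate({"A": 110.0}, priors))             # early in collection
print(rolling_estimate({"A": 110.0, "B": 35.0}, priors))  # later, once B responds
```

Each new response shifts the estimate; if recomputation lags behind incoming data, the figure being validated no longer reflects the current microdata.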

Lessons learned from past migrations to the IBSP suggest that the level of effort required for data quality assessments is either similar to or greater than what is typical. A number of large surveys that have migrated to the IBSP in the past have encountered delays during the production phase because of challenges associated with the data quality assessment task.Footnote 12 At the time of the evaluation, the data quality assessment strategy was being developed.

Migration will benefit from an extended timeframe and experience

Migration activities started in fiscal year 2018/2019 and will continue until 2022/2023. Testing activities will also continue, as needed, during collection. The extended timeframe available for testing (because of the CEAG's five-year cycle) will allow for additional testing activitiesFootnote 13 (i.e., with simulated data, actual data from the 2016 CEAG, and data from content tests conducted in 2019). However, the production phase will be implemented with real 2021 data, with no options available for parallel testing.

Finally, since approximately 30 surveys from the Agriculture Division have already migrated to the IBSP, expertise has been built within the division and ESD. Knowledge gained from previous experience will contribute to the successful migration of the CEAG.

Additional pressures may affect the migration

A risk that could affect the CEAG's migration to the IBSP is the move to cloud technology. Although there are no specific scheduled implementation dates for Statistics Canada programs, any move of IBSP components to cloud technology during the production phase, when most testing will have been completed, would affect the migration. At the time of the evaluation, this topic was still under discussion.

Another element that is noted for every CEAG cycle is the timeliness of content approval. Any changes to content will affect various elements, including the questionnaire and systems.

1.3 Agriculture Statistics Program projects supporting the modernization initiative

Evaluation question

To what extent are there effective governance, planning and project management practices in place to support modernization projects within the ASP?

Summary

The evaluation reviewed a sample of ongoing and completed projects undertaken within the ASP to examine their relationship to Statistics Canada's modernization pillars and expected resultsFootnote 14, and to identify areas for improvement regarding governance, planning and project managementFootnote 15 practices. The evaluation found that the projects were aligned with the modernization initiative and that governance is in place, but project management practices could be improved.Footnote 16

Projects are aligned with the modernization pillars and expected results

Statistics Canada's modernization initiative supports a vision for a data-driven society and economy. The modernization of Statistics Canada's workplace culture and its approach to collecting and producing statistics will result in "greater and faster access to needed statistical products for Canadians."Footnote 17 Five modernization pillars along with expected results have been articulated to guide the modernization initiative (Figure 2).

Figure 2: Statistics Canada modernization initiative

Figure 2: Statistics Canada modernization initiative
Description for Figure 2: Statistics Canada modernization initiative
The Vision: A Data-driven Society and Economy

Modernizing Statistics Canada's workplace culture and its approach to collecting and producing statistics will result in greater and faster access to needed statistical products for Canadians. Specifically, the initiative and its projects will:

  • Ensure more timely and responsive statistics – Ensuring Canadians have the data they need when they need it!
  • Provide leadership in stewardship of the Government of Canada's data asset: Improve and increase alignment and collaboration with counterparts at all levels of government as well as private sector and regulatory bodies to create a whole of government, integrated approach to collection, sharing, analysis and use of data
  • Raise the awareness of Statistics Canada's data and provide seamless access
  • Develop and release more granular statistics to ensure Canadians have the detailed information they need to make the best possible decisions.
The Pillars:

User-Centric Delivery Service:

  • Users have the information/data they need, when they need it, in the way they want to access it, with the tools and knowledge to make full use of it.
  • User-centric focus is embedded in Statistics Canada’s culture.

Leading-edge Methods and Data Integration:

  • Access to new or untapped data modifies the role of surveys.
  • Greater reliance on modelling and integration capacity through R&D environment.

Statistical Capacity Building and Leadership:

  • Whole of government, integrated approach to collection, sharing, analysis and use of data.
  • Statistics Canada is the leader identifying, building and fostering savvy information and critical analysis skills beyond our own perimeters.

Sharing and Collaboration:

  • Program and services are delivered taking a coordinated approach with partners and stakeholders.
  • Partnerships allow for open sharing of data, expertise and best practices.
  • Barriers to accessing data are removed.

Modern Workforce and Flexible Workplace:

  • Organization is agile, flexible and responsive to client needs.
  • Have the talent and environment required to fulfill our current business needs and be open and nimble to continue to position ourselves for the future.
Expected Outcome

Modern and Flexible Operations: Reduced costs to industry, streamlined internal processes and improved efficiency/support of existing and new activities.

Most of the projects examined focus on increasing the use of administrative data and integrating data into centralized systems. The evaluation selected a sample of projects through an objective methodology using the following criteria: level of priority for the ASP, budget, expected impact (e.g., data users, respondents, data quality, and costs) and the perceived contribution to modernization. Additional criteria, such as length, start date, and project stage were also considered. Based on this methodology, five projects were selected:

  • The Agriculture-Zero (Ag-Zero) project is a 7-year project which received funding commencing in fiscal year 2019/2020. It is designed to reduce response burden by replacing survey data with data from other sources. The purpose of AG-Zero is to undertake multiple pilot projects involving the acquisition and the extensive use of satellite imagery, scanner and other administrative data, and models to serve as inputs to the ASP in place of direct data collection from farmers. The project aims to reduce response burden on farmers to as close to zero as possible by 2026, while maintaining the amount and quality of information available.

    The project adopts a "collect once, use multiple times" approach. Administrative data will be used to directly replace survey data, to model estimates that are currently generated using survey data, and to produce value-added statistical products for stakeholders. Under the umbrella of Ag-Zero, a series of subprojects are planned to be implemented over the seven-year periodFootnote 18; at the time of the evaluation, three had been initiated. The following two were selected for review:

    • Pig Traceability uses administrative data to model estimates of pig inventories and has the potential to replace biannual survey estimates with near real-time estimates. The source data are pig trace data collected under the Health of Animals Act.
    • In-season Weekly Yield Estimates uses a combination of satellite imagery, administrative data from crop insurance corporations, and modelling to create in-season estimates of crop yields and area.
  • The Agriculture Taxation Data Program (ATDP) is being redesigned to move from a survey-based to a census-based activity that uses tax records to estimate a range of financial variables including revenues, expenses, and income. The ATDP's redesign to a census-based activity will support replacement of financial data in the CEAG.
  • The Agriculture Data Integration (Ag-DI) project will integrate agriculture commodity surveys that require processing outside the IBSP into the existing Farm Income and Prices Section Data Integration (FIPS-DI) system. The system will combine data from over 100 sources to produce aggregate integrated data for the System of National Accounts. The project will involve the integration of a multitude of spreadsheets and other systems into one common divisional tool. The project will also update the formulas in FIPS-DI to accept the naming convention used by the IBSP or other data sources loaded directly to FIPS-DI. It is expected that cross-analysis between data-sets will be facilitated, particularly when the CEAG will be migrated to the IBSP.
Table 5: Overview of the innovative projects selected and their alignment with modernization pillars
Project Timeframe Alignment with modernization pillars
Ag-Zero
Budget - $ 2.8M
Start: 2019/2020
Length: 7 years
Stage: Planning

Leading-edge methods & data integration: This project involves the use of new sources of data and new methods for collecting data. Extensive use of modelling, machine learning, and data integration are also featured.

Sharing and collaboration: A key element of this project involves the establishment and maintenance of mutually beneficial partnerships with other federal departments and industry associations.

User-centric service delivery: This project is expected to yield improvements in data quality and timeliness of data releases, as well as offer opportunities for new products.

Pig Traceability (AG-Zero sub-project) Start: 2018/2019
Length: 1 to 3 years
Stage: Execution
In-Season Weekly Yield Estimates (AG-Zero sub-project) Start: 2019/2020
Length: 1 to 3 years
Stage: Initiation
Redesign of the ATDP
Budget - $ 1M (approx.)Footnote 19
Start: 2015/2016
Length: 3 to 5 years
Stage: Close-out

User-centric service delivery: Consultations were held with Agriculture and Agri-Food Canada (AAFC) on priorities for the project. AAFC is the primary client and sponsor of the project.

Leading-edge methods and data integration: This project relied heavily on modelling and the integration of agriculture data (CEAG) and tax data.

Ag-DI
Budget - $ 696K (approx.)
Start: 2015/2016
Length: 3 to 5 years
Stage: Execution
Leading-edge methods and data integration: This project features data integration from an operational point of view. The integration will affect efficiency, data quality, and coherence of the data. It will also enable further cross-analysis opportunities.

Governance is in place

Overall, the evaluation found that governance structures are in place to support the projects. Similarly, schedules are developed and regular meetings take place to monitor progress, budgets and outstanding issues.

The projects employ different governance structures. AG-Zero is monitored under the Departmental Project Management Framework (DPMF)Footnote 20 and started on April 1, 2019. The first three years of the project are funded by a modernization investment submission while the final four years will be self-funded with savings realized through the first set of subprojects. Some of the subprojects under the AG-Zero umbrella have additional dedicated funding.

For AG-Zero, as required under the DPMF guidelines, detailed project planning documentation is in place including a project charter, Project Complexity and Risk Assessment, and an IT Development Plan. Monthly dashboards are provided to the Departmental Project Management Office (DPMO), reporting on various aspects including timelines, deliverables, expenditures, and risks. Within the division, a governance structure exists that includes working groups, divisional management, and the CEAG Steering Committee. AG-Zero subprojects are managed through the same governance structure. They are discussed within the division, and updates on elements such as deliverables, risks, and schedules are rolled-up into the AG-Zero monthly dashboard, as needed.

The Ag-DI project is also a DPMF project. It is small in scope, with one resource working full-time and no non-salary investment. Oversight and reporting are via the standard governance structure for the Agriculture Division and the DPMF.

Project management for the ATDP takes place via the regular divisional governance structure; it is not a DPMF project. Evidence indicates that project management has improved over time (e.g., budget planning, schedules, assumptions, governance, roles and responsibilities) and that at the time of the evaluation, adequate governance was in place and the project was on track to meet its overall objectives.

Risk assessments are conducted on an ad hoc basis

Risks for AG-Zero as a whole were identified at the outset of the project and are monitored every month as per DPMF requirements. Risks at the subproject level are meant to be rolled-up to inform risk management at the AG-Zero project level. While project-specific risks are identified and entered into JIRA during regular team lead meetings, there is little evidence that initial risk assessments were conducted for the subprojects. As AG-Zero is the sum of its subprojects, informal risk management at the sub-project level limits the effectiveness of risk management.

For example, the interruption of reliable access to administrative data (short-term or long-term) has been identified as a risk for the AG-Zero project overall. The division has developed mitigation and contingency options, including the feasibility or practicality of remaining "survey ready" in the case that this risk materializes. Because the risk has not been fully assessed at the subproject level, the management of this risk is limited. Similarly, risk management for other non-DPMF activities is taking place on an ad hoc informal basis.

Quantifiable objectives and targets are missing

The projects examined have the potential to advance innovation in important areas such as data collection, processing, analysis, and dissemination. The evaluation found that clearly defined, quantifiable expected outcomes have not been articulated in most cases. There is a general understanding of what types of positive effects these projects "might" generate, but there are few specific objectives that quantify the expected level of improvement in areas such as data quality, cost efficiency, response burden, timeliness, or relevance.

For example, while it is generally assumed that the integration of data from alternative sources will eventually lead to savings in data collection costs, there are no documented expectations for what the level of savings will be and when they will be realized. This is especially true for the subprojects under the AG-Zero umbrella. The AG-Zero project, which has a hybrid funding scheme (i.e., approved funding during the first three years, and self-funding for the remaining four years), does not have a clear plan to identify and measure returns on investment.Footnote 21

Finally, the measurement of returns on investment should be thorough and comprehensive. For example, as data from alternative sources are acquired from external sources in exchange for some type of service (such as data cleaning or preparation), the associated cost of the service must be considered. Non-payment for the administrative data does not mean they are free; there is still a cost for the "quid pro quo" service that must be accounted for. Similarly, associated costs for remaining "survey ready" while using administrative data (i.e., the mitigation strategy implemented for the risk associated with the accessibility of administrative data) should be accounted for.

The establishment of overall performance indicators for projects and for key milestones during the timeline of the project is critical for monitoring the progress of the work and, ultimately, for measuring the return to the agency for the initiative. The return can be in the form of data quality improvements, cost reduction, reduction of response burden, improvements to data access and availability, or any other improvement realized by the agency.

Best practices could be better leveraged

The evaluation found little evidence that best practices and lessons learned from the projects are being shared (or were planned to be shared) outside of the division; nor did it appear that the projects took advantage of experiences acquired by other divisions.Footnote 22 Lessons learned and best practices were not being documented. Instead, they were being deferred until there was "more time."

While minimal effort was made in this regard, staff recognized the importance of sharing and benefiting from others and that sharing and using best practices could be improved. Staff were also aware of channels for this purpose, such as the Innovation Radar and the Economic Statistics Forum. In November 2019, the division provided an overview of the Geospatial Statistics Framework (a system built to view and analyze geospatial data) at the Economic Statistics Forum.

When asked about ways to enhance information sharing, a number of suggestions were provided:

  1. encourage the use of existing corporate mechanisms such as the Innovation Radar
  2. develop a user-friendly, open corporate platform to house more detailed information about initiatives, organized by theme and including contact information
  3. involve partner areas such as the Finance, Planning and Procurement Branch, the Informatics Branch, and the Modern Statistical Methods and Data Science Branch (which support different projects for sound statistical approaches) at the outset of a new initiative, since these groups have a corporate perspective on innovative projects.

Focus is on expediency

The level of project management typically reflects several factors, including risk, materiality and interdependencies. The evidence suggests that timeliness for delivering results is given the highest priority for the projects and that project management is viewed as a time-consuming, onerous task that slows things down. As such, minimal effort is placed on activities such as conducting formal risk assessments, identifying quantifiable goals, undertaking cost-benefit analyses, and sharing best practices (as well as learning from the experiences of other divisions). An appropriate balance is missing.

How to improve the program

2016 Census of Agriculture dissemination strategy

The Assistant Chief Statistician (ACS), Economic Statistics (Field 5), should ensure that:

Recommendation 1:

For the 2021 CEAG, the Agriculture Division explore ways to improve the timeliness of the last two sets of data tables (historical data, and socio-economic data) and increase cross-analysis with non-agricultural sectors.

Recommendation 2:

Web tools include guidance on how to use them and how to interpret data from them. A proactive approach to launching new tools should be taken. Webinars were identified as an effective channel, and the use of other channels would allow for even wider coverage.

Design and delivery: Census of Agriculture migration to the Integrated Business Statistics Program

The ACS, Field 5, should ensure that:

Recommendation 3:

Unresolved issues for the migration to the IBSP, including incompatibilities between the IBSP and the CMP as well as the IBSP processing capacity, are addressed prior to the production phase.

Recommendation 4:

Significant risks during the production phase, particularly with regard to data quality assessments and the exercising of roles and responsibilities, are monitored and mitigated.

Agriculture Statistics Program projects supporting the modernization initiative

The ACS, Field 5, should ensure that:

Recommendation 5:

Planning processes for future projects falling outside the scope of the Departmental Project Management Framework include an initial assessment that takes into account elements such as risk, materiality, public visibility and interdependencies. The assessment should then be used to determine the appropriate level of oversight and project management.

Recommendation 6:

Processes and tools for documenting and sharing of best practices are implemented and lessons learned from other organizations (internal and external) are leveraged.

Management response and action plan

Recommendation 1:

For the 2021 CEAG, the Agriculture Division explore ways to improve the timeliness of the last two sets of data tables (historical data, and socio-economic data) and increase cross-analysis with non-agricultural sectors.

Management response

Management agrees with the recommendation.

For the 2016 Census of Agriculture, no funding was provided for the creation and release of the socioeconomic portrait of the farm population; as such, it was not part of the original scope for the 2016 dissemination plan but was added later. The tool used to create the socioeconomic dataset from the 2016 CEAG (dealing specifically with the linkage between the Censuses of Agriculture and Population) is in scope as a deliverable for the 2021 Census of Agriculture.

The 2021 CEAG dissemination strategy and release schedule will be presented to the CEAG Steering Committee for review and approval. Related processes for the release of selected historical farm and farm operator data will also be reviewed, and the timeline for releases will be adjusted based on feedback from the Federal-Provincial-Territorial partners (key users of the data).

Agriculture Division has already taken steps to increase cross-sectoral analysis with non-agricultural sectors, including the infographics on:

  1. Which came first: The chicken or the egg? Poultry and eggs in Canada
  2. Thanksgiving: Around the Harvest Table.

The CEAG will continue to build on this initiative by developing cross-sectoral infographics, analytical studies, Daily releases and interactive data visualization for the 2021 CEAG data release.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure the delivery of:

  1. The approved dissemination strategy (December 2020)
  2. A proposal for cross-analysis products such as infographics, analytical studies, and Daily releases integrating CEAG data with data from other sectors (March 2021)
  3. A proposal for new interactive visualization tools within the Agriculture Stats Hub (March 2021).

Recommendation 2:

Web tools include guidance on how to use them and how to interpret data from them. A proactive approach to launching new tools should be taken. Webinars were identified as an effective channel, and the use of other channels would allow for even wider coverage.

Management response

Management agrees with the recommendation.

YouTube tutorial videos on using Statistics Canada geographic boundary files with open-source GIS software (QGIS) have been produced and added to the Agriculture and Food portal.

The CEAG will create "How to" instructions and demos on how to use the interactive visualization web tools. The "How to" instructions will be available within each tool and the demos will be presented to data users in a series of webinars planned for the 2021 CEAG releases.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure the delivery of:

  1. A proposal for new interactive visualization tools within the Agriculture Stats Hub, with integral "How to use" instructions and webinar demos (March 2021).

Recommendation 3:

Unresolved issues for the migration to the IBSP, including incompatibilities between the IBSP and the CMP as well as the IBSP processing capacity, are addressed prior to the production phase.

Management response

Management agrees with the recommendation.

The CEAG will continue to work with partners to identify relevant and emerging issues related to the migration to the IBSP during the integrated testing commencing June 2020. Issues will be captured in JIRA and major risks entered in the CEAG risk register. Consolidated risks and issues will be tracked and actioned in project plan documentation.

The integrated testing will take place over several months. All relevant and emerging issues must be resolved by December 2020 to ensure the readiness of production activities.

Issues and risks will be monitored through the CEAG Steering Committee.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure that relevant and emerging IBSP issues and risks are tracked consistent with the DPMF (December 2020).

Recommendation 4:

Significant risks during the production phase, particularly with regard to data quality assessments and the exercising of roles and responsibilities, are monitored and mitigated.

Management response

Management agrees with the recommendation.

A tabletop exercise will be conducted to identify potential gaps in the processes in place (including risk management) for the production phase. Information gathered during the exercise will be used to inform plans and develop potential contingencies. Results will be presented to the CEAG Steering Committee.

The CEAG will engage the IBSP and all its stakeholders (the "SWAT" team) in convening meetings to communicate relevant and emerging issues and risks during the production phase and to find resolutions. Roles and responsibilities will be formally documented and presented at the CEAG Steering Committee.

The SWAT team will be ready for the production phase.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure the delivery of:

  1. The results from the tabletop exercise (December 2020)
  2. The CEAG "SWAT" team with documented roles and responsibilities (March 2021).

Recommendation 5:

Planning processes for future projects falling outside the scope of the Departmental Project Management Framework include an initial assessment that takes into account elements such as risk, materiality, public visibility and interdependencies. The assessment should then be used to determine the appropriate level of oversight and project management.

Management response

Management agrees with the recommendation.

A new process will be implemented (for both subprojects under AG-Zero and non-DPMF projects) that will require the development of a project plan prior to the launch of a new project. The plan will include, among other things: an initial assessment of the issues and risks (and mitigation strategies); a description of the methodology and assumptions; the identification of interdependencies and expected outcomes; and communication plans. The monitoring of projects will take place through existing governance mechanisms. Finally, projects already underway will be subject retroactively to the new process.

Where relevant, the plans will be used to update the DPMF project issues and risks register and the DPMF Project Plan.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure the delivery of a new project plan process (June 2020).

Recommendation 6:

Processes and tools for documenting and sharing of best practices are implemented and lessons learned from other organizations (internal and external) are leveraged.

Management response

Management agrees with the recommendation.

The Agriculture Division has already shared lessons learned and best practices through various mechanisms including:

  1. a presentation at AAFC on producing crop yield estimates using earth observation and administrative data on March 14, 2019
  2. a presentation at the Economic Statistics Forum on November 12, 2019
  3. a presentation at AAFC on February 7, 2020, on Predicting the Number of Employees using Tax Data.

As part of the new project plan process outlined previously, the Agriculture Division will leverage lessons learned from other organizations where applicable. In addition, as part of ongoing monitoring, lessons learned and best practices from projects will be documented.

Deliverables and timelines

The Assistant Chief Statistician, Economic Statistics (Field 5) will ensure the delivery of:

  1. A systematic approach to share and document lessons learned (December 2020)
  2. A presentation(s) at conferences such as the Economic Statistics Forum (March 2021)
  3. An article(s) in @StatCan or the Modernization bulletin (March 2021)
  4. A presentation(s) at AAFC (March 2021).

Appendix A: Integrated Business Statistics Program (IBSP)

The IBSP provides a standardized framework for surveys with common methodologies for collection and processing. Through standardization and use of corporate services and generalized systems, the program optimizes the processes involved in the production of statistical outputs; improves governance across all areas involved in statistical data output, particularly for change management; and modernizes the data processing infrastructure. This is achieved by balancing the development of a coherent standardized model with the maintenance of flexible program-specific requirements. It is expected that the IBSP surveys will use:

  • the Business Register (BR) as a common frame;
  • harmonized concepts and content for questionnaires;
  • electronic data collection as the principal mode of collection;
  • shared common sampling, collection and processing methodologies;
  • common tools for data editing and analysis; and
  • the tax data universe for estimating financial information.

Appendix B: List of products released (2016 CEAG)

Table 6: List of products released (2016 CEAG)
| Date of release | Title of product (including link) | Type of product | Time lapse between collection and release: days (years) | Time lapse since previous release: days |
| --- | --- | --- | --- | --- |
| May 10, 2017 | The Daily: 2016 Census of Agriculture | Statistics Canada's official release bulletin | 365 (1 year) | N/A |
| May 10, 2017 | Farm and Farm Operator Data (CANSIM tables 004-0200 to 004-0246) | Data table | | |
| May 10, 2017 | Provincial and territorial trends (NL; PE; NS; NB; QC; ON; MB; SK; AB; BC; YT/NT) | Analytical product | | |
| May 10, 2017 | Reference maps: Provinces | Map | | |
| May 17, 2017 | A portrait of a 21st century agricultural operation | Analytical product | 367 | 2 |
| May 24, 2017 | Production efficiency and prices drive trends in livestock | Analytical product | 374 | 7 |
| May 31, 2017 | Seeding decisions harvest opportunities for Canadian farm operators | Analytical product | 381 | 7 |
| June 7, 2017 | Leveraging technology and market opportunities in a diverse horticulture industry | Analytical product | 388 | 7 |
| June 14, 2017 | Farmers are adapting to evolving markets | Analytical product | 395 | 7 |
| June 21, 2017 | Growing opportunity through innovation in agriculture | Analytical product | 402 | 7 |
| June 27, 2017 | 150 Years of Canadian Agriculture | Infographic | 408 | 6 |
| September 13, 2017 | Agricultural Ecumene Boundary File | Boundary file | 486 | 78 |
| November 20, 2017 | Canadian Agriculture at a Glance: Other livestock and poultry in Canada | Analytical product | 554 (~1.5 years) | 68 |
| December 6, 2017 | Canadian Agriculture at a Glance: Dairy goats in Ontario: a growing industry | Analytical product | 570 | 16 |
| December 11, 2017 | Selected Historical Data from the Census of Agriculture (CANSIM Tables 004-0001 to 004-0017) | Data table | 575 | 5 |
| December 13, 2017 | Agricultural operation characteristics | Map | 577 | 2 |
| January 25, 2018 | Land use, land tenure and management practices | Map | 620 | 43 |
| February 22, 2018 | Crops - Hay and field crops | Map | 648 | 28 |
| March 22, 2018 | Canadian Agriculture at a Glance: Innovation and healthy living propel growth in certain other crops | Analytical product | 676 | 28 |
| April 5, 2018 | Crops - Vegetables (excluding greenhouse vegetables), fruits, berries and nuts, greenhouse products and other crops | Map | 690 | 14 |
| April 26, 2018 | Livestock, poultry, bees and characteristics of farm operators | Map | 711 (~2 years) | 21 |
| November 27, 2018 | The Daily: The socioeconomic portrait of Canada's evolving farm population, 2016 | Statistics Canada's official release bulletin | 926 (~2.5 years) | 215 |
| November 27, 2018 | Agriculture-Population Linkage Data (The socioeconomic portrait of Canada's evolving farm population, 2016) (13 Data Tables) | Data table | | |
| November 27, 2018 | Socioeconomic overview of the farm population - The Agriculture Stats Hub | Dynamic web application (Agriculture-Population Data Linkage) | | |
| November 27, 2018 | Canadian farm operators: An educational portrait | Infographic | | |
| November 27, 2018 | The socioeconomic portrait of Canada's evolving farm population | Infographic | | |
| November 27, 2018 | Canada's immigrant farm population | Infographic | | |
| December 13, 2018 | Canadian Agriculture at a Glance: Female and young farm operators represent a new era of Canadian farmers | Analytical product | 942 | 16 |
| January 17, 2019 | Canadian Agriculture at a Glance: Aboriginal peoples and agriculture in 2016: A portrait | Analytical product | 977 | 35 |
| March 21, 2019 | Canadian Agriculture at a Glance: The educational advancement of Canadian farm operators | Analytical product | 1040 (~3 years) | 63 |
| July 3, 2019 | Canadian Agriculture at a Glance: The changing face of the immigrant farm operator | Analytical product | 1144 (~3 years) | 104 |

Appendix C: Governance and management structures (Census of Agriculture and the Integrated Business Statistics Program)

Overall management of the Census of Agriculture:

  • CEAG Working Group (WG) for overall management of the CEAG (monthly meetings): chaired by the Assistant Director (AD) and Chief, and includes Chiefs from other relevant areas (e.g., methodology, IT and unit heads)
  • CEAG Management Team for day-to-day management of the CEAG (weekly meetings): includes the same members as the CEAG WG, but is also extended to other staff involved
  • CEAG Steering Committee:Footnote 23 an overarching advisory and decision-making function (monthly meetings)
  • Other WGs and committees for various functions (e.g., Census of Population/CEAG WG, Collection WG, Advisory Committee on Agriculture and Agri-Food Statistics, Federal-Provincial-Territorial Committee on Agriculture Statistics).

Governance structures already established within the Enterprise Statistics Division (ESD) for the IBSP:

  • IBSP Transition/Integration/Production WGs (bi-weekly meetings): chaired by ESD and including the CEAG and all other partners, such as the Operations and Integration Division (OID), the Collection Planning and Research Division (CPRD), as well as methodology and IT, to support the transition, integration, and production phases
  • IBSP Project Management Team:Footnote 24 an overarching advisory and decision-making function that includes directors general, directors and ADs involved in the IBSP migrations
  • Change Management Committee: involved only during the production phase, it will be responsible for overseeing change management during production (e.g., if the schedule needs to be changed, the Committee will triage the request to the different stakeholders involved).

Appendix D: Innovation Maturity Survey

In 2018, Statistics Canada conducted a survey to measure the innovation maturity level of the agency across six attributes:Footnote 25

  • Client expectations - incorporating the expectations and needs of clients in the design and development of innovative services and policies
  • Strategic alignment - articulating clear innovation strategies that are aligned with the organization's priorities and mandate
  • Internal activities - building the right capabilities aligned with the innovation strategies
  • External activities - collaborating across the whole of government and with external partners to co-innovate policies, services and programs
  • Organization - fostering the right organizational elements to drive innovation performance at optimal cost
  • Culture - aligning the innovation goals, cultural attributes, and behaviours with the innovation strategies

The Agriculture Division had maturity levels higher than those of Statistics Canada overall and of the Economic Statistics Field as a whole.

Figure 3: Results from the Innovation Maturity Survey (5-point scale)Footnote 26

The figure depicts the results of Statistics Canada's Innovation Maturity Survey for four groups (Statistics Canada; the Economic Statistics Field; the Agriculture, Energy and Environment Statistics Branch; and the Agriculture Division). Six attributes were used: Client expectations; Strategic alignment; Internal activities; External activities; Organization; and Culture. Overall maturity was also assessed.

Results from the Innovation Maturity Survey (5-point scale)

| Attribute | Statistics Canada | Economic Statistics Field | Agriculture, Energy and Environment Statistics Branch | Agriculture Division |
| --- | --- | --- | --- | --- |
| Overall maturity | 1.98 | 2.01 | 2.15 | 2.35 |
| Client expectations | 2.16 | 2.34 | 2.55 | 2.84 |
| Strategic alignment | 1.98 | 1.95 | 2.03 | 2.52 |
| Internal activities | 2.11 | 2.08 | 2.22 | 2.38 |
| External activities | 1.47 | 1.57 | 1.73 | 1.66 |
| Organization | 1.90 | 1.90 | 1.97 | 2.11 |
| Culture | 2.26 | 2.24 | 2.40 | 2.56 |