The Ultimate Guide to Flow Best Practices and Standards

There’s no way around it: Salesforce Flow is the automation tool of the future. Flow is not just an ‘admin tool’ — it’s the holy grail of declarative development that unites developers AND admins by allowing the use of Lightning Web Components (LWC) and Apex, and letting the admin orchestrate all of it in one place. We’re starting to see a unique collaboration between admins and developers, with both sides learning a little something about Development and Administration.

In this blog, we’ll discuss best practices, ‘gotchas,’ and design tips to make sure your flows scale with your organization.

1. Document your flows!

Documenting your flow allows the next person, or the forgetful future version of yourself, to understand the overall flow’s objective. Flow designers don’t create solutions out of thin air — we need a business case to solve hard problems, and having those breadcrumbs is critical for maintaining the automation long term.

Outline the flow’s purpose

Fill in that description field! Which problem is your flow solving? Be sure to include what your flow does, the objects it touches, and where it's invoked from (such as which page layout it lives on if it's a screen flow, or which Process Builder process calls it if it's an Autolaunched flow). Even better, mention where the flow hooks into the business process and which groups it touches, so the next person knows who to go to with questions. Have a JIRA ticket or Story ID for the work? Stick it in the description!

Ensure consistent naming across elements and variables

Stick to naming conventions when creating variables and elements in Flow. Include in the variable description what you're capturing. A little bit of work up front will go a long way for future 'you' or someone else who inherits the flow. There's no right or wrong way to do this; just keep it consistent within the flow. One popular naming convention is camel case. Check out this nifty wiki article from the Salesforce Exchange Discord Server for suggestions on flow naming.

Document every step

Write a short blurb for each step that explains what the Flow element is and why it's there. This ensures any member of the team can pick up the work if needed. It's especially critical when you're using a workaround to address a Flow limitation, performing a more advanced function, or calling an Apex invocable.

2. Harness the power of invoked actions

Clean up inefficient flows with invoked actions — don't be scared of using some reusable code to make nice, clean, presentable flows. In the old days, you could invoke Apex from Flow, but the action was tied to a specific object or data type. Those days are gone now that Flow supports generic inputs and outputs. Generic code in an invocable action amplifies your Flow capabilities across the board — use one action in as many flows as you want!

Image showcasing how invoked actions can clean up an inefficient flow.

Flow is fantastic but has its limitations, especially around queries, large data volumes, and working with collections. Find yourself creating loops upon loops and then more nested loops, or hitting element execution limits? It’s time for reusable Apex to do some heavy lifting for you.

Note: There are two great repositories out there for Flow-invoked actions: the Automation Component Library and UnofficialSF.
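To make this concrete, here's a minimal sketch of a generic invocable action (the class, method, and field names are all hypothetical, not from a specific library). Because the input is a generic record collection, the same action can be called from flows on any object:

    public with sharing class FlowCollectionStamper {
        public class Request {
            @InvocableVariable(label='Records' required=true)
            public List<SObject> records;      // generic input: works for any object
            @InvocableVariable(label='Field API Name' required=true)
            public String fieldName;           // assumes a text field, e.g. Description
            @InvocableVariable(label='Value' required=true)
            public String fieldValue;
        }

        @InvocableMethod(label='Stamp Field on Records')
        public static void stampField(List<Request> requests) {
            List<SObject> toUpdate = new List<SObject>();
            for (Request req : requests) {                   // one Request per flow interview (bulkified)
                for (SObject rec : req.records) {
                    rec.put(req.fieldName, req.fieldValue);  // set the field dynamically
                    toUpdate.add(rec);
                }
            }
            update toUpdate;                                 // a single DML statement for all interviews
        }
    }

Because nothing in the class is specific to one object, a flow on Case, Contact, or any custom object can reuse it.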

3. Utilize subflows for cleaner, reusable, scalable ways to manage flows

Take this flow as an example of when you should start asking yourself, “Should I use a subflow?”

Image of an inefficient flow that could use a subflow.

Here are some classic use cases for when you should consider a subflow:

  • Re-use: If you’re doing the same thing in your flow multiple times, or doing the same thing you did with another flow, call a subflow to do it for you. The development world calls these ‘Helper’ classes.
  • Complex processes/subprocesses: If your flow involves multiple processes and branching logic, make use of a main flow that launches other secondary flows. Example:
    • ‘Manage Contact Data’ — Main screen flow that launches various disconnected processes:
      • Associate Contact to Companies
      • Check Contact Data
      • Manage Contact’s Associations
  • Organizational chaos: If your flow looks like the one above, you probably need a subflow (or many) just to understand how everything connects and to make sense of the bigger process.
  • Permission handling: Let’s say you have a screen flow running in user context but you need to grab some system object that the user doesn’t have access to. Use a subflow! By using a subflow with elevated permissions, you’re able to temporarily grant that user what they need to continue the flow.

Benefits of subflows:

  • Make changes once instead of in 10 different places.
  • Take advantage of clean, concise, and more organized flows.
  • Maintain a single place for org-wide behaviors, like sending a consistent error email or showing the same error screen across flows (when passing in the $Flow.FaultMessage variable).

Considerations with subflows:

  • Not supported in Record-Triggered flows yet.
  • Requires more collaboration and testing between groups if changes need to be made to the subflow.
  • Debugging can be tricky with subflows. When you start using Lightning components and Apex actions, you don’t always get detailed errors if the error occurred in a subflow.
  • There is some UX overhead associated with going overboard with subflows — don’t go making too many.

4. Don’t overload flows with hard-coded logic

Logic abstraction

A great way to slow down your development process and reduce your team’s agility is hard-coding all of your logic from within flows. When possible, you should store your logic in one place so that other automation tools like Apex, Validation Rules, and other flows can also benefit. Per this great presentation in 2018 on the Salesforce Admin YouTube Channel, you should consider using Custom Metadata, Custom Settings, or Custom Labels in your flows in the following scenarios:

  • Consolidate application or organization data that is referenced in multiple places.
  • Manage mappings (for example, mapping a state to a tax rate or mapping a record type to a queue).
  • Manage information that is subject to change or will change frequently.
  • Store frequently used text such as task descriptions, Chatter descriptions, or notification content.
  • Store environment variables (URLs or values specific to each Salesforce environment).

You can use things like Custom Labels if you want to store simple values like ‘X Days’, record owner IDs, or values you expect might change in the future.

To give you an idea of how much cleaner Custom Metadata-driven flows are, take a look at the before and after of this After-Save Case flow that maps record types with queues. The solution utilized Custom Metadata records that store a Queue Developer Name, a RecordType Developer Name, and the Department field to update on the case dynamically.

Before Custom Metadata-driven logic

Image showcasing what a flow looks like before Custom Metadata-driven logic

After Custom Metadata-driven logic

Image showcasing what a flow looks like after Custom Metadata-driven logic
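To give a rough sense of what powers the 'after' version, the flow's Get Records on the custom metadata type is equivalent to a query like the one below. The object and field API names here are hypothetical stand-ins for the mapping described above:

    // Rough Apex equivalent of the flow's Get Records element, assuming a
    // hypothetical Case_Routing_Rule__mdt custom metadata type and a
    // recordTypeDevName variable holding the case's Record Type DeveloperName.
    Case_Routing_Rule__mdt rule = [
        SELECT Queue_Developer_Name__c, Department__c
        FROM Case_Routing_Rule__mdt
        WHERE Record_Type_Developer_Name__c = :recordTypeDevName
        LIMIT 1
    ];

Adding a new record type then means adding one metadata record, not a new Decision branch.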

Data-driven screen flows

If you find your team is building 20 to 30 (or more) screens with constantly changing logic and content, then you may need a dynamic screen design pattern based on record data.

Check out this great article on UnofficialSF written by Alex Edelstein, VP of Product at Salesforce, on how you can build a 500 screen-equivalent flow using only custom object records (DiagnosticNode/DiagnosticOutcome).

5. Don’t make these common ‘builder’ mistakes

Check for nulls/empty collections

Flow is essentially declarative coding, which means the guard rails are off! You need to plan for every scenario when building your flow. This means planning for cases where what you’re looking for might not exist!

Always add a Decision after a Lookup/Get element to check for no results if you plan on using those results later in your flow. Directly after the Get, add a Decision with an 'Is Null EQUALS False' check on the variable the Get element created. If you're a coder, think of this as your 'null' check.

Empty collections: Some invocable actions or screen components output an 'empty' collection, which is not the same as 'null'. A classic example is the out-of-the-box File Upload component, which returns an empty text collection if nothing is uploaded. If you run into this, the easiest check is to assign the collection's item count to a number variable with an Assignment element and base your Decision on that number.
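For example, the File Upload check can be as simple as this two-element pattern (the variable and element names are just illustrative):

    Assignment: numUploadedFiles  Equals Count  {!UploadedFileIDs}
    Decision 'Any Files Uploaded?': numUploadedFiles  Greater Than  0

If the count is 0, route the user back to the upload screen or down an alternate path.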

Don’t hard-code IDs

Flow does not yet let you reference the Developer Name of things like Record Types, Queues, or Public Groups in certain parts of the UI, but that doesn't mean you should hard-code the ID.

Instead, use the results of a Get element, a Custom Label, or Custom Metadata. This will save you headaches when deploying through your environments, since Record Type IDs and other unique identifiers might differ between environments.

As an example, create a Get Records step on the RecordType object. In your filter conditions, provide the DeveloperName and the SObjectType (the object the record type belongs to), then store the returned record's ID (the Record Type ID) for later use in your flow.

Need to reference a queue or a public group? Do a ‘Get’ using the DeveloperName of the queue on the Group object instead of hard-coding the ID.
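For reference, those Get elements are the declarative equivalent of queries like the following (the DeveloperName values are placeholders for your own record type and queue):

    SELECT Id FROM RecordType WHERE DeveloperName = 'Support_Case' AND SObjectType = 'Case' LIMIT 1
    SELECT Id FROM Group WHERE DeveloperName = 'Tier_1_Support' AND Type = 'Queue' LIMIT 1

The IDs they return are correct in every environment, so nothing breaks when you deploy.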

Learn to get comfortable with the rich documentation Salesforce provides around setup objects like Group, RecordType, and ContentDocumentLink. Understanding the Salesforce data model will make you an infinitely more powerful administrator and Flow designer.

Take care when looping

There are three main concerns when looping: element execution limits, DML and SOQL limits inside loops, and complex formula variables.

1. Beware of the executed element limit — Every time Flow executes an element, it counts as an element execution. As of Spring '21, 2,000 element executions are allowed per flow interview. Consider a scenario where you are looping over more than 1,500 contacts.

Within your loop, you have your loop element (1), a decision element (2), an assignment step to set some variables in your looped record (3), and a step in which you add that record to a collection to update, create, or delete later in the flow (4). Each of those four elements within the loop will count toward your execution limit. This means your loop over 1,500 records will have 6,000 executed elements, which will far exceed the iteration limit.

When you're approaching this limit, you'll likely need to either get creative with forcing a 'stop' in the transaction or make your flow more efficient by using invoked actions. Keep in mind that the transaction ends when a Pause element is hit or, in screen flows, when a screen or local action is shown.

2. Do not put DML or query elements (Get, Update, Delete, Create) inside a loop unless you're 100% sure the scope will not trigger any governor limits. Usually, that's only the case in screen flows where you're looping over a small subset of records. Instead, collect your changes in the loop and commit them once afterward (see the sketch after this list), or use invoked Apex if you need complicated logic over a collection.

3. Be careful with complex formula variables — Per the Record-Triggered Automation Decision Guide, “Flow’s formula engine sporadically exhibits poor performance when resolving extremely complex formulas. This issue is exacerbated in batch use cases because formulas are currently both compiled and resolved serially during runtime. We’re actively assessing batch-friendly formula compilation options, but formula resolution will always be serial. We have not yet identified the root cause of the poor formula resolution performance.”
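Here's the pattern from point 2 sketched as its Apex analogy, since it's the same bulkification idea Flow expects from you (the object and field are just for illustration, and the contacts list is assumed to come from a single earlier Get/query):

    List<Contact> toUpdate = new List<Contact>();
    for (Contact c : contacts) {          // loop body: no queries or DML in here
        c.Department = 'Sales';           // Flow equivalent: an Assignment element
        toUpdate.add(c);                  // Flow equivalent: Add to a collection variable
    }
    update toUpdate;                      // Flow equivalent: one Update Records element after the loop

One query before the loop, one DML after it, and nothing expensive inside.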

Create fault paths

Before you build your flow, think about what should happen if an error occurs. Who should be notified? Should it generate a log record?

One of the most common mistakes an up-and-coming Flow designer makes is not building fault paths. A fault path tells the flow what to do when it encounters an error — think of it as exception handling. The two most common uses are showing an error screen in screen-based flows or sending an email alert containing the error message to a group of people.

I typically like passing errors off to a subflow that handles them for me; that way, if I ever need to change any aspect of the error path, I only need to change it in one place for all of my flows.
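A minimal version of that pattern looks like this (the names are illustrative):

  1. Create an Autolaunched subflow, e.g., 'Sub - Handle Flow Fault', with text input variables such as faultMessage and flowName.
  2. In each calling flow, point every fault connector at a Subflow element that passes {!$Flow.FaultMessage} into faultMessage.
  3. Inside the subflow, send the email alert or create the log record from those inputs; screen flows can surface the same message on an error screen.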

6. Mind the flow ‘context’ in screen flows

Always pay attention to the context your flow runs under when using a screen flow. If your screen flow runs in User Context, don't create a Get element against an object or field the user can't see, like admin-only fields on the User object or setup objects.

Test as your target user base and as users outside your target audience. You don't want users to have a rough experience in a flow that wasn't meant for them. Use Flow-specific permissions or Custom Permissions assigned via Permission Sets if you need granular control over flow access. Need to control access to the whole flow? Use Flow permissions. Need to control access to specific pieces of a single flow? Use a Custom Permission assigned through a Permission Set.

If you truly need to grab system-level fields or records (like Custom Metadata records), use subflows with elevated ‘System’ context permissions to perform key tasks that you can call upon across flows.

For enterprise-grade implementations, consider using a comprehensive logging strategy in which flows log errors in the same place as your Apex code. You can use a tool like Nebula Logger to write to a custom Log object when your flow fails, or just have an elevated-permission subflow create log records for you if you don’t need anything fancy.

7. Try not to mix Apex, Process Builders, Workflow Rules, and Record-Triggered flows

Every object should have an automation strategy based on the needs of the business and the Salesforce team supporting it. In general, you should choose one automation tool per object. In older orgs, it's almost inevitable that Apex triggers end up mixed with Autolaunched flows/processes or, more recently, that Process Builders end up mixed with Record-Triggered flows. This can lead to a variety of issues:

  1. Poor performance
  2. Unexpected results due to the inability to control the order of operations in the ‘stack’
  3. Increase in technical debt with admins/devs not collaborating
  4. Documentation debt

One common approach is to offload DML activity to Apex and let declarative tools handle non-DML activities like email alerts and in-app alerts — just be careful to ensure none of your Apex conflicts with the declarative automation. We're still very much in the 'wild wild west' phase of Record-Triggered flows as architects and admins figure out the best way to incorporate them into their systems.

I’ve seen some more adventurous orgs use Trigger Handlers to trigger Autolaunched flows (see Mitch Spano’s framework as an example). This is a fantastic way to get both admins and developers to collaborate.
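Here's a rough sketch of what that handoff can look like (the flow name, input variable, and handler context are hypothetical); the trigger handler controls when and in what order things run, while the Autolaunched flow owns the business logic the admins maintain:

    // Inside a trigger handler's after-update logic (framework-agnostic sketch).
    // Assumes the Autolaunched flow 'Case_After_Update_Handler' declares an
    // input variable named 'records' that accepts a Case collection.
    Map<String, Object> inputs = new Map<String, Object>{ 'records' => Trigger.new };
    Flow.Interview handlerFlow = Flow.Interview.createInterview('Case_After_Update_Handler', inputs);
    handlerFlow.start();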

Here’s a great article on the pitfalls of mixing various automations by Mehdi Maujood.

Keep the number of flows to a minimum

Again, do not implement After-Save flows if you already have lots of active Apex or Process Builders on an object! Create a migration strategy or consider waiting for subflow support before moving your main objects (Accounts, Cases, Contacts) over to Record-Triggered flows.

Keep the number of Record-Triggered flows per object to a minimum. There isn't really a golden rule, but a good rule of thumb is to separate flows by business function or role so there's little to no chance of an order-of-execution conflict. Keeping the number of flows low and controlling the order of execution ensures a consistent outcome for any given event in the system.

Speaking of benchmarks and performance, always use Before-Save flows whenever you're updating the same record that triggered the automation. Because the changes are applied in memory before the record is saved, there's no extra DML or save cycle. Per the Architect's Guide to Record-Triggered Automation, Before-Save flows are SIGNIFICANTLY faster than After-Save flows and nearly as performant as Apex.

Since subflows aren’t supported yet, I generally recommend holding off on moving your core objects over to Record-Triggered flows until we’re able to create cleaner ‘Handler’ flows that utilize subflows for those objects.

8. Build a bypass in your flows for data loads and sandbox seeding

This isn’t a Flow-specific best practice, but it’s a good idea to include a bypass in your triggers and declarative automation. With such a bypass strategy, you can disable any and all automations for an object (or globally) if you ever need to bulk import data. This is a common way to avoid governor limits when seeding a new sandbox or loading large amounts of data into your organization.

There are many ways of doing this — just be consistent across your automations and ensure your bypasses are all in one place (like a Custom Metadata type or Custom Permission).
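One lightweight option is a custom permission checked at the very start of every automation. The permission name below is hypothetical; in a flow, the first Decision would check {!$Permission.Bypass_Automation}, and the Apex side mirrors it:

    // Early exit in a trigger handler when the running user holds the
    // hypothetical Bypass_Automation custom permission (assign it via a
    // permission set to your integration or data-loading user before big imports).
    if (FeatureManagement.checkPermission('Bypass_Automation')) {
        return;
    }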

9. Understand how scheduled flows affect governor limits

Selecting your record scope at the start of a scheduled flow has huge governor limit ramifications for that flow. By 'setting up the scope,' we mean the screen where you select the object and filter conditions:

Option on screen to select the Object and Filter conditions.

When specifying the record scope in the Flow Start (above)

One flow interview is created for each record retrieved by the scheduled flow’s query.

The maximum number of scheduled flow interviews per 24 hours is 250,000, or the number of user licenses in your org multiplied by 200, whichever is greater. For example, an org with 500 user licenses gets the 250,000 default (500 × 200 is only 100,000), while an org with 2,000 licenses gets 400,000.

This means you cannot act on more than 250,000 records (or whatever your org's limit is, based on user licenses) per 24 hours using this method. If you need to work with more, we recommend going the route of an invocable action and not specifying your scope here. You may need to look into Batch Apex and asynchronous processing — ask a developer for help in these scenarios.

Additionally, the Flow engine packages the records into batches of 200 to ease governor limits. This means 200 records = 1 transaction but 200 flow interviews. So if your initial scope identifies 800 records, you will use 800 flow interviews across 4 transactions.

Keep in mind that although the flow is bulkified, the element execution limit still applies. If any record in your scope causes the flow to exceed the 2,000 executed element limit, you will get errors. Be extremely careful when using invoked Apex here as well, as it can be difficult to write correctly bulkified invocable actions.

When specifying the scope within the flow (Invoked Action, Get Records)

Limits will be more aligned with what you're used to in a single flow, meaning one interview is created for the flow instead of one per record. If you go this route, do not also specify a record scope for the same set of records (see the screenshot above)! If you do, the flow will effectively do N² work and hit limits quickly.

Go this route when you need to have more control over limits and you want to invoke Apex that might involve SOQL or complex processing.

In this scenario, if an initial Get Records returns 800 records, Flow will not try to batch them. Keep in mind the running user is still the Automated Process user, which comes with certain quirks, like not being able to view all CollaborationGroups (Chatter groups) or Libraries.

Double dipping: Again, DO NOT select a record scope and also perform a Get Records for the same set of records; every one of the N interviews will process all N records, so you're doing N² work and will hit limits quickly.

Which path do I choose?

There is no right or wrong answer as to which path to choose. The first route gives you less control over limits — you cannot specify your batch size or cap the total records in scope. So, if you need stricter control around limits, it may be best to create an invocable action and specify your scope that way. Or skip the scheduled flow entirely and use a scheduled Apex job.
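If you do go the Apex route, a minimal sketch looks like the class below (the name, query, and schedule are illustrative). You control the scope and the batch size explicitly instead of relying on Flow's fixed batching:

    // Schedulable + Batchable class: run on a schedule, process records in batches you choose.
    public class NightlyContactCleanup implements Database.Batchable<SObject>, Schedulable {
        public Database.QueryLocator start(Database.BatchableContext bc) {
            // The scope is defined here instead of in a scheduled flow's Start element.
            return Database.getQueryLocator('SELECT Id FROM Contact WHERE LastActivityDate < LAST_N_DAYS:365');
        }
        public void execute(Database.BatchableContext bc, List<SObject> scope) {
            // Up to the chosen batch size per execution; do your updates or call reusable logic here.
        }
        public void finish(Database.BatchableContext bc) {}

        public void execute(SchedulableContext sc) {
            Database.executeBatch(new NightlyContactCleanup(), 200);   // batch size you control
        }
    }
    // Schedule it once from Anonymous Apex, e.g., daily at 1:00 AM:
    // System.schedule('Nightly Contact Cleanup', '0 0 1 * * ?', new NightlyContactCleanup());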

For more on scheduled flow considerations, check out the official documentation: Schedule-Triggered Flow Considerations.

10. Consult the checklist!

You’re ready to go build great flows! Here’s a handy checklist that applies to every flow:

1. Documented elements and descriptions: Make sure your flow has a solid description, a decent naming convention for variables, and descriptions for Flow elements that might not make sense if you revisit them in 6 months.

2. Null/Empty checks: Don’t forget to check for Null or Empty results in Decision elements before acting on a set of records. Don’t assume a happy path — think of all possible scenarios!

3. Hard-coded IDs: Don't hard-code IDs for things like Owner IDs, Queue IDs, Groups, or Record Type IDs. Do a 'Get' for the respective object using the DeveloperName.

4. Hard-coded logic: Don't hard-code logic that could change frequently, such as Queue IDs, Owner IDs, or numbers (like a discount percentage), into your Decisions. Utilize Custom Labels and Custom Metadata!

5. Excessive nested loops & ‘hacks’: Are you stretching Flow’s performance to its limit when code could be better suited? Use generic Apex invocables that your developers build, or utilize components from the Automation Component library or UnofficialSF instead.

6. Looping over large data volumes: Don’t loop over large collections of records that could trigger the Flow element limit (currently 2,000).

7. Check record and field security & flow context: Don’t assume your user can see and do everything you designed if you’re building a screen flow. Test your flows as both your intended and unintended audiences.

8. DML in a loop: Don't perform DML inside a loop in an Autolaunched flow or Record-Triggered flow. Screen flows are okay if you're 100% sure the scope won't trigger governor limits.

9. Flow errors: What do you want to happen when the flow hits an error? Who should be notified? Use those fault paths!

10. Automation bypass: Does your Autolaunched or Record-Triggered flow have a Custom Setting/Custom Metadata-based bypass in place?

Conclusion

If you’ve made it this far, congratulations! You’re well on your way to being a pro Flow builder, and you aren’t alone! Join the Salesforce Automation Trailblazer Community and connect with other Salesforce Admins all over the world. This community is a great place to learn more about the flows that other admins are building, hear the latest updates from Product Owners, and ask questions about Flow. I hope you found this guide helpful, and I can’t wait to see all of the flows you build!

Be sure to check out my last Flow-related post on the Architect blog as well.
