# Best practices for implementing with PROCESIO
## a) General principles & know-how

### 1) Frame the work

- **Start with outcomes:** one measurable KPI per process (e.g., "reduce onboarding cycle time by 30%").
- **Map the as-is → to-be:** capture triggers, systems, data contracts, SLAs, exceptions.
- **Define RACI (Responsible | Accountable | Consulted | Informed) early:** business owner, builder(s), reviewer, approver, operator.

Quick PROCESIO-flavored example: automating a "CRM customer upsert" via PROCESIO.

| Activity | Product Owner | PROCESIO Builder | Reviewer/QA | Security/IT | Sales Ops |
| --- | --- | --- | --- | --- | --- |
| Define KPI & scope | A | I | C | I | C |
| Build process & connectors | I | R | C | C | I |
| Data mapping & validation rules | C | R | C | I | A |
| UAT & sign-off | A | R | C | I | C |
| Prod release & monitoring | A | R | C | C | I |

### 2) Solution design

- **Modularize:** one main process + small, reusable subprocesses (auth, paging, validation, notifications). Think in terms of trade-offs:
  - too many subprocesses can mean harder debugging and extra execution time;
  - no subprocesses at all results in a monolithic approach (hard to debug, with the added risk of re-executing unnecessary steps).
- **Stable contracts:** standardize input/output with DTO-style variables; avoid passing raw API payloads between steps (see the sketch at the end of this section).
- **Idempotency:** design so that a retry won't create duplicates (use external IDs, dedupe keys). Accidentally triggering the process twice should have no impact on the records.
- **Error ports:** think about using error ports to allow error capture and error-handling logic.
- **Stateless core:** keep state in systems of record; use PROCESIO variables only for transient orchestration.
  - Build your PROCESIO processes so they don't need to remember anything between runs.
  - Anything you must remember across runs (checkpoints, last sync time, job status, idempotency keys, approvals, etc.) should live in a durable system of record: the CRM/ERP/ticketing tool itself, or a database you control.
  - Use PROCESIO variables only inside a single execution (to pass data between steps, hold tokens, page cursors, correlation IDs); they are ephemeral by design.
- **Perceived speed for users:** above all, users must feel that they are using a technology that helps them progress in their daily jobs (not the other way around). For this, the UI they use must be fast; they should not waste time waiting for PROCESIO to respond. A slow UI greatly damages how clients perceive PROCESIO, because users cannot possibly know why the forms are unresponsive; from their point of view, it must be because of PROCESIO. We must not allow this situation to ever happen!
- **Secrets management:** do not keep secrets in processes! Keep them in the Credential Manager.
- **Parameters handling:** when a parameter is used in multiple processes and can change over time (most of the time, all of them need to change at some point), set it in a single place (e.g., in one process) and share it everywhere. That way, once you change the parameter in one place you do not have to change it anywhere else; it applies automatically. The bad example below shows how not to do it: not centralizing parameters (and writing secrets in plain, hardcoded text) will always lead to problems and future effort.
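To make the stable-contracts idea concrete, here is a minimal TypeScript sketch (PROCESIO scripting actions accept NodeJS-family code); the `CustomerDto` shape and the `mapCrmPayloadToCustomer` mapper are hypothetical names for illustration, not part of PROCESIO:

```ts
// Hypothetical DTO: the only shape allowed to travel between steps.
interface CustomerDto {
  externalId: string; // stable ID from the source system; doubles as a dedupe key
  name: string;
  email: string;
}

// Map the raw CRM payload to the DTO at the integration boundary.
// The field names on `raw` are assumptions about a generic CRM response.
function mapCrmPayloadToCustomer(raw: Record<string, unknown>): CustomerDto {
  return {
    externalId: String(raw["id"] ?? ""),
    name: String(raw["full_name"] ?? ""),
    email: String(raw["email"] ?? ""),
  };
}
```

Downstream steps then depend only on `CustomerDto`; if the CRM changes its payload, only this mapper has to change.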
### 3) Integrations (1-2 days for the first system)

- Wrap each external API in a reusable subprocess:
  - auth / get token for \<system>;
  - \<system> get by id, \<system> search, \<system> upsert;
  - a pagination helper and a rate-limit handler.
- Implement exponential backoff, idempotency keys, and a standard error object.
- Retry at growing intervals, for example:

| Attempt | Wait (± jitter) |
| --- | --- |
| 1 | 5s |
| 2 | 10s |
| 3 | 30s |
| 4 | 1 min |

- **Idempotency keys:** the goal is that the same logical action runs once, even if your process retries or is triggered twice. For this purpose, use idempotency keys (dedupe / once-only); see the sketch at the end of this part a).

### 4) Data contracts (ongoing)

- Define DTO variables (e.g., Customer, OrderLine, Address).
- Reuse the data model (DM): children of the same DM, where appropriate.

### 5) Robustness

- Add global try/catch paths via the error port + a Decisional action.
- Define appropriate error-handling logic within PROCESIO:
  - retryable → retry (max = 3, backoff = 2^n);
  - business error → send to a queue + notify the owner;
  - hard fail → alert, capture context, stop.

### 6) Observability

- Generate a correlation ID at the start; pass it to all logs/requests.
- Store the process instance ID of the main process; when appropriate, set a specific status for it (success/fail, or other custom statuses).
- Add alerts on thresholds (e.g., if something is failing repeatedly).

### 7) Document & handover

- Write documentation for each process; think about mapping the order in which processes are triggered (a visual map, if necessary).
- Think in terms of SOPs (standard operating procedures), so that someone who does not know what happens can debug or follow the logic of the process.
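To close out part a), here is a minimal TypeScript sketch combining the retry ladder from section 3 with an idempotency key, suitable for a scripting action or any client code; the endpoint URL, the `Idempotency-Key` header name, and the payload shape are illustrative assumptions, not a PROCESIO API:

```ts
// Retry with exponential backoff and jitter, sending the same idempotency
// key on every attempt so the server can deduplicate repeated requests.
async function upsertCustomerOnce(customer: { externalId: string; email: string }): Promise<void> {
  // Derive the key from a stable business identifier rather than a random
  // UUID, so a duplicate trigger of the whole process produces the same key.
  const idempotencyKey = `customer-upsert-${customer.externalId}`;
  const waitsMs = [5_000, 10_000, 30_000, 60_000]; // the 5s / 10s / 30s / 1min ladder above

  for (let attempt = 0; attempt <= waitsMs.length; attempt++) {
    const response = await fetch("https://crm.example.com/api/customers/upsert", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Idempotency-Key": idempotencyKey,
      },
      body: JSON.stringify(customer),
    });
    if (response.ok) return;
    // Only 5xx responses are worth retrying; 4xx errors are business errors.
    if (response.status < 500) throw new Error(`Non-retryable error: ${response.status}`);
    if (attempt === waitsMs.length) break;

    // ± up to 20% jitter so parallel executions do not retry in lockstep.
    const jitter = waitsMs[attempt] * 0.2 * (Math.random() * 2 - 1);
    await new Promise((resolve) => setTimeout(resolve, waitsMs[attempt] + jitter));
  }
  throw new Error("Retries exhausted");
}
```

On the server side, the system of record would store the processed keys, so re-sending the same key becomes a no-op; that stored state is exactly the kind of durable data the "stateless core" principle says to keep outside the process.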
## b) Particular component usage

### 1) SQL Server actions

The following image depicts best practices in using the SQL Server action and how to design the SQL statements.

1) Always use the action parameters to inject data into the SQL statement. Centralize all parameters at the beginning of the SQL statement, so it is easier to see which values are injected. A few notes on how to use parameters:

a) The parameters always go on the right-hand side of an operation:

```sql
-- Good practice
SELECT * FROM Invoices AS i WHERE i.InvoiceId = @param_InvoiceId;
-- Good practice
SET @sqlVariable = @parameter;

-- Bad practice
SELECT *, @param_InvoiceId /* you will get an error doing this */
FROM Invoices AS i
WHERE i.InvoiceId = @param_InvoiceId;
-- Bad practice
SET @parameter = @sqlVariable;
```

b) The parameters can have the same names as the internal variables of the SQL statement; the compiler safely knows where to replace the parameters.

c) Use each parameter only once in the statement, especially if your parameter names are the same as the internal variable names of the SQL statement.

2) After the SQL parameters are injected into the statement, perform a cleaning operation on the SQL variables that hold the data coming from the SQL parameters, to make sure that the rest of the SQL statement performs as expected.

Note: the SQL execution engine optimizes the execution plan, but there are many scenarios in which it cannot determine the best execution plan; in those cases it does not optimize at all and executes the statement exactly as it was written. So the best course of action is to always write SQL statements in their most effective form, and that is the scope of the following best practices.

3) If the SQL statement is a SELECT (though the following rules apply nicely to DELETE and UPDATE statements as well), make sure that the FROM and the JOINs follow these best practices:

a) Select data first from the table with the smallest record set; the last one should be the one with the largest record set:

```sql
-- Good practice
SELECT * FROM Invoices INNER JOIN InvoiceArticles ...
-- Bad practice
SELECT * FROM InvoiceArticles INNER JOIN Invoices ...
```

b) The join conditions should be applied in order, from the most restrictive condition to the least restrictive one:

```sql
-- Good practice
SELECT *
FROM Invoices AS i
INNER JOIN InvoiceArticles AS ia
    ON ia.InvoiceId = i.InvoiceId
    AND ia.ArticleStatus = 1;

-- Bad practice
SELECT *
FROM Invoices AS i
INNER JOIN InvoiceArticles AS ia
    ON ia.ArticleStatus = 1
    AND ia.InvoiceId = i.InvoiceId;
```

c) The order of INNER and LEFT JOINs matters as well. This is an extension of rule 3.b) that is easy to miss, which is why we mention it separately: when possible, place the INNER JOINs first and the LEFT JOINs after them:

```sql
-- Good practice
SELECT *
FROM Invoices AS i
    /* this join limits the number of remaining rows that the following join will have */
    INNER JOIN InvoiceArticles AS ia ON ia.InvoiceId = i.InvoiceId
    LEFT JOIN Stockholders AS sh ON sh.InvoiceId = i.InvoiceId;

-- Bad practice
SELECT *
FROM Invoices AS i
    /* this join does not limit the number of remaining rows that the following join will have */
    LEFT JOIN Stockholders AS sh ON sh.InvoiceId = i.InvoiceId
    INNER JOIN InvoiceArticles AS ia ON ia.InvoiceId = i.InvoiceId;
```

d) Limit the number of joins in an SQL statement to 5, maximum 6! This is important for creating an optimal execution plan, because the SQL execution engine must evaluate the possible execution paths: 5! = 120, 6! = 720, 7! = 5040, and 8! = 40320. As you can see, every extra join makes the engine's task of determining an optimum execution plan factorially harder. If it cannot determine an optimum plan, it defaults to not optimizing at all; that is why it is very important how you write the statement from the start (there are other reasons as well for which an optimum execution plan cannot be determined).

4) The WHERE section of an SQL statement should respect the following best practices:

a) The WHERE conditions should be applied in order, from the most restrictive condition to the least restrictive one:

```sql
-- Good practice
SELECT * FROM Invoices AS i
WHERE i.InvoiceId = 123
    AND i.IssueDate >= '2025-04-11';

-- Bad practice
SELECT * FROM Invoices AS i
WHERE i.IssueDate >= '2025-04-11'
    AND i.InvoiceId = 123;
```

b) Use pre-calculated values as much as possible, instead of calculating values on the spot (this is a more general rule!).
See the examples:

```sql
-- Good practice
SET @var = CASE WHEN ISNULL(@var, '') = '' THEN NULL ELSE @var END;
SET @calcFixed = CASE WHEN @var IS NULL THEN 1 ELSE 2 END;

SELECT *,
    CASE WHEN @calcFixed = 1 THEN i.Col1 ELSE i.Col2 END AS CalcCol
FROM Invoices AS i
WHERE i.Column = @var;

-- Bad practice
SET @var = CASE WHEN ISNULL(@var, '') = '' THEN NULL ELSE @var END;
SET @calcFixed = CASE WHEN @var IS NULL THEN 1 ELSE 2 END;

SELECT *,
    /* notice that the ISNULL function will be evaluated for every result row */
    CASE WHEN ISNULL(@var, '') = '' THEN i.Col1 ELSE i.Col2 END AS CalcCol
FROM Invoices AS i
WHERE i.Column = CASE WHEN ISNULL(@var, '') = '' THEN NULL ELSE @var END;
/* notice that for every row, the CASE condition and the ISNULL function have to be
   evaluated as well; the execution engine does not precalculate them once and apply
   the result to every row, it evaluates them for every row. Bad for performance! */
```

5) When applying filters in the WHERE section of the SQL statement, respect the following best practices:

- Clean the filtering parameters before applying them to the filter (see point 2 above).
- Make sure that each condition activates only if there is a filter set for that condition:

```sql
-- Good practice
DECLARE @refDec nvarchar(max) = @refDec,
        @uit nvarchar(50) = @uit;

-- prepare the variables
SET @refDec = CASE WHEN ISNULL(@refDec, '') = '' THEN NULL ELSE @refDec END;
SET @uit = CASE WHEN ISNULL(@uit, '') = '' THEN NULL ELSE @uit END;

SELECT *
FROM dbo.UitCodes AS uc
    LEFT JOIN dbo.VehiclePlates AS vp ON uc.Uit = vp.Uit
WHERE (@refDec IS NULL
        OR (@refDec IS NOT NULL AND vp.ReferintaDeclarant LIKE '%' + @refDec + '%'))
    /* this type of condition takes effect only when there is a filter applied */
    AND (@uit IS NULL
        OR (@uit IS NOT NULL AND uc.Uit LIKE '%' + @uit + '%'));

-- Bad practice
DECLARE @refDec nvarchar(max) = @refDec,
        @uit nvarchar(50) = @uit;

SELECT *
FROM dbo.UitCodes AS uc
    LEFT JOIN dbo.VehiclePlates AS vp ON uc.Uit = vp.Uit
WHERE (vp.ReferintaDeclarant LIKE
        CASE WHEN @refDec = '' OR @refDec IS NULL THEN '%' ELSE '%' + @refDec + '%' END)
    /* this type of condition always takes effect and hurts the performance of the SQL statement */
    AND (uc.Uit LIKE
        CASE WHEN @uit = '' OR @uit IS NULL THEN '%' ELSE '%' + @uit + '%' END);
```

### 2) PROCESIO forms & tasks

The following best-practice principles, all outcome-related, should always be followed when building forms with PROCESIO. Always think about UX (user experience) and make sure it is the best it can be.

a) The user should not wait more than 1s on 90% of the clicks they can make in the built UI. For this, follow these rules:

- Limit the calls made to processes as much as possible; the forms have enough features that most calls to processes can be avoided.
- When calling a process, make sure the process has a minimum number of actions (ideally 1, still good 3, already bad 5, really bad >5) and that it is highly optimized for speed; see the "optimize for speed" section in this document.
- Limit the usage of loading screens triggered by a user action (e.g., a click); when possible, use loading visuals only for the sections affected by the user action.
- Implement pagination & filtering when tables with a potentially large number of rows are used (see the sketch at the end of this section).

b) The user should not wait more than 1-3s on more than 10% of the clicks they can make in the built UI. For this, apply the point a) rules above and, in conjunction with them (and only if the time is still >1s after applying them), the following rules:

- For complex, long processing tasks, break the long task down into multiple steps so that none takes longer than 3s, and update the UI visually as each step finishes. For each of the resulting sub-tasks, apply the point a) rules above. This gives the user a better experience, as they will perceive the UI as moving faster.
- Repeat the above until no task lasts more than 3s, and the tasks lasting >1s make up less than 10% of the actions a user can trigger (e.g., by click).

c) The user should never wait more than 3s on any click. For those situations, implement an asynchronous mechanism that allows the user to continue their activity and be notified when the task has completed.
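The pagination rule above pairs with the "pagination helper" from the integrations checklist in part a). Below is a minimal TypeScript sketch of a cursor-based page fetch, assuming a hypothetical API that returns `items` and a `nextCursor`; the URL and the query-parameter names are illustrative:

```ts
// Fetch one page at a time instead of loading the whole table into the form.
interface Page<T> {
  items: T[];
  nextCursor: string | null; // null when there are no more pages
}

async function fetchPage<T>(baseUrl: string, pageSize: number, cursor?: string): Promise<Page<T>> {
  const url = new URL(baseUrl);
  url.searchParams.set("limit", String(pageSize));
  if (cursor) url.searchParams.set("cursor", cursor);

  const response = await fetch(url);
  if (!response.ok) throw new Error(`Fetch failed: ${response.status}`);
  return (await response.json()) as Page<T>;
}

// Usage: the UI renders each page as it arrives, so the user never waits
// for the full data set.
// const page1 = await fetchPage<Invoice>("https://api.example.com/invoices", 50);
// const page2 = await fetchPage<Invoice>("https://api.example.com/invoices", 50, page1.nextCursor ?? undefined);
```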
### 3) PROCESIO: optimize for speed

In the following section, we show good and bad implementation examples and how to design your processes for speed.

Every PROCESIO action takes about 50 ms to execute; some actions natively take longer (such as the Decisional action) and others take less (such as the numerical Add action). An action takes longer to execute for the following reasons:

- we parse/inject a large volume of data into it (a variable containing large data);
- we interact with external systems that take longer to respond (Call API, SQL, FTP).

So, to optimize the processing speed of a process, we need to understand the above and acknowledge that:

- the more actions a process has, the longer it is going to run;
- the longer an external system takes to respond, the longer the process is going to run.

Therefore, to optimize the processing speed of a process, we need to address the following:

- reduce the number of actions used, at all costs;
- use the actions as effectively as possible (e.g., by writing code in the scripting actions);
- optimize what you are doing in external systems, for example:
  - SQL actions: improve the performance of your SQL statements and of the database;
  - Call API & FTP actions: use them only where necessary if the speed of the process execution is crucial (e.g., when you call the process from a form and do not want the user to wait).

Below, we show a bad example in terms of performance and then how to optimize it:

- Bad performance example: it takes about 2s to go through the 5 Decisional actions.
- Optimized version of the process: it takes <200 ms to execute, done with a single Decisional action.
- An even more optimized (best) version of the above would be a single scripting action (e.g., NodeJS) that does all of this by itself, as sketched below.

Another example of bad usage is extracting data from JSON objects through chains of actions: it leads to overcomplicated process logic and overuse of actions, which in turn leads to poor performance and bad design overall.
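To illustrate the "single scripting action" point, here is a minimal NodeJS/TypeScript sketch that replaces a ladder of Decisional actions plus several JSON-extraction actions with one script; the payload shape, field names, and tier thresholds are invented for the example:

```ts
// One scripting action instead of five Decisional actions plus several
// JSON-extraction actions: parse once, branch in code, return one result.
interface ScriptOutput {
  tier: string;
  email: string;
}

function classifyCustomer(payloadJson: string): ScriptOutput {
  const payload = JSON.parse(payloadJson); // extract everything in a single step

  const total: number = payload?.order?.total ?? 0;
  const email: string = payload?.customer?.email ?? "";

  // The whole decision ladder lives in one place and executes in
  // microseconds, instead of one ~50 ms Decisional action per branch.
  let tier: string;
  if (total >= 10_000) tier = "platinum";
  else if (total >= 5_000) tier = "gold";
  else if (total >= 1_000) tier = "silver";
  else if (total > 0) tier = "bronze";
  else tier = "none";

  return { tier, email };
}
```

The process then spends a single action on the whole ladder, and the JSON is parsed once instead of once per extraction action; downstream steps simply read `tier` and `email` from the script's output variable.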