
Over the next 4 weeks, Sugar Integrate will be undertaking security enhancements that will benefit all users of the platform. As a direct result of these changes, all user accounts must update their passwords on or before the deadlines below to comply with our updated, stricter password requirements. This password update has no impact on using the Platform APIs or existing Authorization Header tokens, but users who have not reset their passwords by the deadlines will be locked out of the Sugar Integrate user interface until they reset them with the assistance of the Support team.

 

Required Action:

All basic auth user passwords (i.e., User + Password, non-SSO users) must be updated to passwords that the UI rates as Excellent, meaning they comply with the following standards:

  • at least 10 characters
  • at least 3 of the following 4 types of characters:
    • a lower-case letter
    • an upper-case letter
    • a number
    • a special character (such as !@#$%^&*)
  • No more than 2 identical characters in a row (e.g., "111" is not allowed)
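As a sketch, the rules above could be checked with a function like this (an illustrative approximation, not Sugar Integrate's actual server-side validator; the set of accepted special characters may differ):

```javascript
// Illustrative check of the password policy described above.
function meetsPasswordPolicy(password) {
  if (password.length < 10) return false;

  // At least 3 of the 4 character classes must be present.
  const classes = [/[a-z]/, /[A-Z]/, /[0-9]/, /[!@#$%^&*]/];
  if (classes.filter((re) => re.test(password)).length < 3) return false;

  // No more than 2 identical characters in a row ("111" fails).
  if (/(.)\1\1/.test(password)) return false;

  return true;
}
```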

 

How to Reset a Password in Sugar Integrate

Reset all non-Excellent user passwords by logging into the Sugar Integrate UI (Reference Link), by performing a call to POST /authentication/passwords (Reference Link) with the user's auth headers, or by navigating to the reset password screen for the appropriate environment. Here are those links:

 

Note that users who already have Excellent passwords do not need to take any action. Users with weak passwords who do not update them by the specified deadlines will be unable to log in to the UI and will need to go through the "Forgot Password" flow to regain access to their accounts.

 

Deadlines:

Staging - August 14, 2020

Production - August 28, 2020

A common request we hear is for a solution to store files that are attached to records in an external location. For example: "How can I store uploaded documents in Box.com and include a reference to the file in Sugar?" I have created a high-level example that I'd like to show you.

 

Let's talk about the proposed solution. The first thing we'll do is poll for new events on the Documents module in Sugar - for this exercise I will only deal with Documents and Opportunities but most solutions will benefit from expanding this to other module types. When a new Document is added to Sugar Sell, we will:

  1. Verify it is of Document type
  2. Retrieve the details of the Document (like name, filename, etc)
  3. Retrieve the details of the Document's parent (we're only checking Opportunities for this exercise)
  4. Create a directory in Box.com using the name of the parent Opportunity as the directory's name. Note that I was able to switch this to Google Drive with about 10 minutes of changes (really, I just needed to authenticate the Google Drive adapter and change the base URL for the final link from app.box.com/file/{id})
  5. Copy the file from the Sugar Sell Document into the newly created directory in Box.com
    1. We will use the same filename
    2. If the file already exists, overwrite the existing file (Box.com has versioning so the overwrite is less of a dangerous operation)
  6. Generate a direct URL to the file in Box.com
  7. Write the URL to a custom field in the Sugar Sell Opportunity record
  8. Delete the original Document record from Sugar Sell

 

Setup

To begin, we must do some configuration in Sugar Sell. First we need to add a custom textarea field to the Opportunities module via Studio. A generic textarea field is perfect for this exercise - we can append each new file URL to the existing value in the field and use "\r\n" to create a new line each time. PLUS, in v10.1, we added URL linking in textareas. So, the links will be functional without adding any additional HTML.
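The append logic is simple enough to sketch up front (illustrative only; the actual append happens later, inside the procedure's JavaScript step):

```javascript
// Append a new file URL to the existing textarea value, one URL per line.
// Since v10.1 renders URLs in textareas as links, no extra HTML is needed.
function appendFileUrl(existingValue, newUrl) {
  return existingValue ? existingValue + "\r\n" + newUrl : newUrl;
}
```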

 

In Studio in Sugar's admin screen, add this new textarea field to the Opportunities module (see: Creating Fields support document). I called mine externaldoclink_c. After creating the field, remember to add it to the record view layout so that we can see it when viewing an Opportunity.

 

While you are in Sugar Sell admin, there are a few other things that we nearly always recommend when creating an integration with a Sugar product. All of these will help avoid authentication conflicts between logged-in instances of Sugar and the REST calls from Sugar Integrate:

  • turn off IP Validation in the System Settings section of Sugar Admin. 
  • create a custom platform in the Developer Tools section of Sugar Admin
  • create a custom user that will be used exclusively for API calls

 

Now that Sugar is configured, we can move over to Sugar Integrate. We will need an authenticated instance of the Sugar Sell adapter and one of the Box.com adapter. In the Sugar Sell adapter, be sure to enable eventing and poll for the Documents module - I chose 1 minute intervals for testing but this is probably too often in reality.

 

This solution needed a couple of endpoints that weren't available by default in the Sugar Sell adapter. I have made a request to have them added as I think they are important. For now, however, we will have to add them manually. 

 

The first new endpoint will allow us to get the details of the file like filename, etc. Without this endpoint, we would only know the GUID of the document record. So, we will add a new endpoint called /files/{objectId} that will map to the Sugar endpoint of /Documents/{objectId}/file/filename

 

  1. In Sugar Integrate, navigate to the Resources tab of the Sugar Sell Adapter.

    2. Click on "Add a new Endpoint" on any of the sections of the swagger view

    3. Here, we will enter the details of the new endpoint. Use this screenshot for reference.
      1. Be sure to choose GET as the method for both fields at the top. 
      2. The first field, which represents the Sugar Integrate endpoint, will be set to /files/{objectId}
      3. The second field, which represents the Sugar Sell endpoint, will be set to /Documents/{objectId}/file/filename
      4. Set the Response Content Type field to "application/octet-stream" so that we will receive an actual file and not just JSON metadata describing the file.
      5. Then we need to define the objectId parameter that we put into the endpoints
      6. Click the Save icon in the far right corner of this endpoint section when your settings are complete  

     

     

    The second new endpoint that we need to add is for retrieving the opportunity that this document is associated with. We'll make it generic enough so that it can be used in the future for any type of parent record for the document.

    1. If you are not still in the resources tab of the Sugar Sell adapter, navigate to there now.
    2. Click on "Add a new Endpoint" on any of the sections of the swagger view

    3. Here, we will enter the details of the new endpoint. Use this screenshot for reference.
      1. Be sure to choose GET as the method for both fields at the top. 
      2. The first field, which represents the Sugar Integrate endpoint, will be set to /{objectName}/{objectId}/{childObjectName}. When we call this endpoint, we will replace those parameters like so: /Documents/123e4567-your-GUID-here-abcdef987654/opportunities
      3. The second field, which represents the Sugar Sell endpoint, will be set to /{objectName}/{objectId}/link/{childObjectName}
      3. Leave the Response Content Type field as "application/json" because we want to receive JSON metadata describing the linked records.
      5. Then we need to define the parameters that we put into the endpoints
        1. objectName - string
        2. objectId - string
        3. childObjectName - string
      6. Click the Save icon in the far right corner of this endpoint section when your settings are complete
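To summarize, here are the two custom endpoint mappings side by side, along with a small helper showing how the {param} placeholders get substituted. This is an illustrative sketch, not the adapter's internal logic:

```javascript
// The two custom endpoint mappings added above.
const customEndpoints = [
  {
    integrate: "/files/{objectId}",
    sugar: "/Documents/{objectId}/file/filename",
    responseType: "application/octet-stream",
  },
  {
    integrate: "/{objectName}/{objectId}/{childObjectName}",
    sugar: "/{objectName}/{objectId}/link/{childObjectName}",
    responseType: "application/json",
  },
];

// Expand {param} placeholders in a path template.
function expandPath(template, params) {
  return template.replace(/\{(\w+)\}/g, (_, name) => params[name]);
}
```

For example, expanding the second mapping's Sugar path with objectName "Documents", objectId "abc123", and childObjectName "opportunities" yields /Documents/abc123/link/opportunities.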

     

     

    If you'd like to test these new additions, simply click on your instance of the adapter on the left side panel and then the "Try it out" button will appear on the right of your endpoint panels. You will need a Document Id from Sugar Sell to complete either test. The Id can be found by navigating to the Document in Sugar and copying the GUID from the URL string in the address bar.

     

     

    The Procedure

    Now that everything is configured, it is time to create our procedure. I'm attaching a JSON file with my procedure to this article. You may import that to test the functionality, or continue following along in this exercise to create your own version. Remember to unzip the file before attempting to import it into Sugar Integrate.

     

    This procedure has 2 configuration variables already set up: sourceAdapter and destinationAdapter. So, when you create an instance of the procedure, you will set those values to your authenticated Sugar Sell and Box.com adapters respectively. If you are re-creating this procedure from scratch, remember to create these 2 Adapter Instance type variables.

     

    Whether you are importing the procedure or starting from scratch, it is important to understand the steps. I think I'll explain the steps one by one.

     

    1. Trigger - every procedure has a trigger as its first step. This procedure is using an event-based trigger. We configured our Sugar Sell adapter to poll for Documents, so those events will trigger this procedure when we set it up to use our Sugar Sell adapter instance.
    2. isItDocument - This is a JS Filter type step. Here, I am simply verifying that this is a Document and that it is new (not an update or delete). This is fine for a demo, but in a real-life scenario, we should write some code and add some steps to handle the other possibilities. All JavaScript steps must call the done() function to move on, and a JS Filter step is expected to send true or false to done(). If the object in the trigger is a Document and it is new, we want the procedure to continue running, so we send true to done(). If it is not a Document, or it is not newly created, we want the procedure to stop running and move to the onFailure step, so we send false to done().
    3. parseValues - This is a JS Script step that will grab the values from the trigger that we need for subsequent steps. As you can see, I only needed the GUID of the Document from the trigger. So I grabbed that and sent it to the done() function so it will be available in the next step.
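As a sketch, steps 2 and 3 might look like this (the trigger payload shape here is an assumption; inspect a real execution log to confirm the actual field names):

```javascript
// Step 2 (JS Filter): continue only for newly created Documents.
function isItDocument(trigger, done) {
  const event = trigger.event || {};
  done(event.objectType === "Documents" && event.eventType === "CREATED");
}

// Step 3 (JS Script): expose just the Document GUID to later steps.
function parseValues(trigger, done) {
  done({ fileID: trigger.event.objectId });
}
```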
    4. getFileDetails - This is an API Adapter step. We are using this to make another call to Sugar Sell so that we can get the details of the file. This is where we call one of the custom endpoints that we created above. So, the Adapter Instance Variable will be set to our source adapter config variable. The method of our request will be GET. The API endpoint we are calling is the /files/{objectId} one that we created. We must replace {objectId} with the fileID from the previous step using the dollar-sign, curly-brace notation: ${steps.parseValues.fileID}.
    5. getParent - This is another API Adapter step. This one is going to call our other custom endpoint that we created earlier so that we can retrieve the Opportunities record associated with this Document. We will again use the sourceAdapter config variable and set the Method field to GET. The endpoint this time is /Documents/${steps.parseValues.fileID}/opportunities because we want to find opportunities linked to the Document with the fileID from the parseValues step.

    6. createMetadata - This step is a JS Script step. Now we have the file and the parent Opportunity record. The next task is to copy the file to our destination (Box.com) and save the URL to the file in the Opportunity record. To do that, we need to create some parameters for the coming steps. Note the conditional in this step. I am checking to see if there is, in fact, a parent Opportunity for this Document. If not, I don't want to proceed (for this demo). I am also assuming that there is only one associated Opportunity with this Document. This is safe because in my use case, we are only proceeding with the procedure if the Document is new and not already in the system. This additional conditional verifies that it was also created as a Document associated with an Opportunity.
      1. downloadHeaders - JSON object that defines what we want to receive. In our case, we are pulling down a file so application/octet-stream is correct.
      2. uploadHeaders - JSON object that defines what we are going to send in the filestream step. For that, we are using multipart/form-data so we can send the file and additional data
      3. destinationPath - JSON object defining the location of the file on the destination platform. I also set the overwrite parameter to true because I am using Box.com and it has pretty decent versioning. Your use case may differ.
      4. newDirectory - A JSON object defining the path of the file in the destination platform. This is a concatenation of the base directory I wanted to use (offloadedimages) and the name of the parent Opportunity. This way I can see in Box.com all of the files associated with an Opportunity.
      5. parentId - This one probably isn't necessary as we could reference it later via ${steps.getParent.response.body.records[0].id}. But, putting it into a simpler variable and exposing it through the done() function makes it nice and neat to reference later. 
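Put together, the createMetadata step might be sketched like this (the exact field names the Box.com adapter expects are assumptions; check its API docs):

```javascript
// Build the objects needed by the createDirectory and streamTheFile steps.
// Stops the flow (done(false)) when the Document has no parent Opportunity.
function createMetadata(fileName, parentRecords, done) {
  if (!parentRecords || parentRecords.length === 0) {
    done(false);
    return;
  }
  const dir = "/offloadedimages/" + parentRecords[0].name;
  done({
    downloadHeaders: { accept: "application/octet-stream" },
    uploadHeaders: { "Content-Type": "multipart/form-data" },
    destinationPath: { path: dir + "/" + fileName, overwrite: true },
    newDirectory: { path: dir },
    parentId: parentRecords[0].id,
  });
}
```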


    7. createDirectory - This is another API Adapter step. It is probably not necessary, as the Stream File step will create a directory if it doesn't already exist, but I like to break things into small chunks and I wanted some error handling around this step. So, I made a separate step to create the directory. Note that I set the Adapter Instance Variable to our destinationAdapter variable (Box.com). The API Method is POST because we are sending data TO the Box.com endpoint. The endpoint we are calling is simply /folders/. For this step, we need to send a request body object; we defined it in the previous step as newDirectory. If we click on the Show Advanced button, we will reveal the Body field. This is where we put our reference to the object from the createMetadata step.

    8. streamTheFile - This is a File Stream step. This is where the magic happens. This single step will download our file from Sugar Sell and then upload it to Box.com. All we have to do is fill in the values.
      1. Download Adapter Instance Variable - the sourceAdapter configuration variable. This says that we will be downloading from Sugar Sell
      2. Download Method - We are simply making a GET call
      3. Download API - This is the endpoint for Sugar Sell that we want to make the GET call to. In our case it is the custom endpoint we created that gives us the file itself with the Document ID we grabbed in the parseValues step: /files/${steps.parseValues.fileID}
      4. Upload Adapter Instance Variable - the destinationAdapter configuration variable. This says that we will be uploading to Box.com
      5. Upload Method and Upload API - To send the file, we will make a POST call. The endpoint we call in the Box.com adapter is /files
      6. Download Headers - the JSON object that we defined in the createMetadata step
      7. Upload Headers - the JSON object that we defined in the createMetadata step
      8. Upload Query - the JSON object with the destinationPath that we defined in the createMetadata step

    9. makeTheExternalLink - A JS Script step. In this step, I am using the response from the Stream File step to create the URL of the file that has been saved in Box.com. For this demo, the custom field is a textarea to which I want to append each new external file URL. The result of this step is a JSON object defining the value to be stored in our custom field in Sugar Sell.
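A sketch of this step (the stream step's response shape and the externaldoclink_c field name are assumptions based on this demo):

```javascript
// Build the Box.com URL for the uploaded file and append it to the
// existing value of the custom textarea field on the Opportunity.
function makeTheExternalLink(streamResponse, existingFieldValue, done) {
  const fileId = streamResponse.entries[0].id; // assumed Box.com upload response
  const url = "https://app.box.com/file/" + fileId;
  const value = existingFieldValue ? existingFieldValue + "\r\n" + url : url;
  done({ postBody: { externaldoclink_c: value } });
}
```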

    10. addExternalLinkToCRM - This is an API Adapter step that will update the Opportunity record in Sugar Sell with the new value we created in the previous step. Since we are updating Sugar Sell, our Adapter Instance Variable will be set to the sourceAdapter config variable. We are doing an update, so I'm calling the PATCH method of the opportunities/{id} endpoint. Therefore, our endpoint path is /opportunities/${steps.createMetadata.parentId}. The last thing we need is the body of this PATCH call. Since in the previous step we passed only the postBody object to the done() function, we do not have to specify which value to use; it just needs to be set to the result of that step: ${steps.makeTheExternalLink}.

    11. deleteOriginalDocument - Another API Adapter step. This is our final step. Now that we have copied the file over and updated the parent Opportunity with the external link, we have no need for the original Document or file. So, this step will delete it from the DB and file server. Our Adapter Instance Variable is the sourceAdapter because we are removing the file from Sugar Sell. We will make a DELETE call to /documents/{id} and that's it!

    There you have it. That is our procedure. I have a few onFailure steps in there that simply console.log an "All done." message. Typically, this should be more robust but for demo purposes, it is just fine.

     

    I know this is a lot, but taken in small steps it should make perfect sense. Is this the best solution? Far from it, but it does demonstrate the concept. A more complete solution would be a copy of the Documents module that never stores the file on the Sugar side at all; it would simply offload it to the external storage location. There would also have to be much more robust error handling throughout the procedure. My hope is that this will get you started. If (when) you do create something inspired by this post, please share it in the comments.

    A consistent challenge for integration developers is managing flat file integrations with systems where direct access is not natively provided by existing functionality or permitted for security reasons. In this guide, you'll see how to take an existing CSV file in an SFTP location and upload/stream its content to Sugar products — a necessary step before you can quickly transform the contained data with a common resource transformation.

     

    Overview

    In this example, we'll do the following:

    1. Take a .csv file from an SFTP location (using the SFTP adapter)
      1. This file will contain rows of records made up of appropriate Sugar Integrate field names (for Account records: billing_address_street, industry, email1, etc). Any fields included in the CSV that are not in Sugar or not in the Sugar record type that you are creating will be skipped and not imported.
      2. Here is a CSV with sample data or you could use a data generation utility like mockaroo.com to generate your own. Unzip the file before uploading to your SFTP server - you only need the .csv file.
    2. Upload file data to a CRM system (Sugar Sell)
      1. The data will be parsed so that a record can be created for each line in the csv file. These data will be mapped appropriately in Sugar Sell. When completed, you should have new records in Sugar Sell in the module that you specified.

    Although this tutorial specifically utilizes a source SFTP location and uploads to Sugar Sell, both the source resources and endpoint can be changed or extended to suit your use case. For example, you could select top-level sources from nearly any bulk-supported endpoint and upload the resulting downloaded data to any other endpoint that can take in data from a CSV file.

     

    Prerequisites

    In order to follow along with and perform this use case, you'll need the following:

    • Sugar Integrate account (request a trial account here)
    • Sugar Sell admin account (Sugar Enterprise or Serve will work also)
    • Configured SFTP server and valid credentials
    • Authenticated instances of the SFTP and Sugar Sell adapters
    • A csv file with data to be imported into Sugar Sell on your SFTP server. Remember to actually upload the file to your SFTP server before trying this process! Here is a csv with sample data of Accounts
    • The JSON configuration file of the sample procedure we created for this article

     

    Creating and Configuring the Procedure Instances

    With Sugar Integrate, you can create procedures from scratch, use existing procedures as a starting point for new ones, or import JSON files of an existing procedure. Here, we'll import the JSON you downloaded above and configure the instances to work properly with each other.

     

    Importing JSON to Create the Procedures

    Your existing procedures are available from the Procedures tab on the left-hand side of our platform's interface. Click Build New Procedure and then select Import to upload the JSON for the procedure we're going to use (see our docs for more details about importing JSON).

     

     

    After importing the JSON for the procedure, it's time to create the procedure instances.

     

    Creating the Procedure Instances

    Remember to configure your SFTP and Sugar Sell adapter instances before starting this step. Once you have those instances, navigate to the Procedure catalog, hover over the procedure, and click Create Instance.

     

     

    On the Instance Creation page:

    1. Enter a name for the procedure instance. Here, we use "Instance 1".
    2. Click the plus sign under targetInstance and select your Sugar Sell adapter instance.
    3. Click the plus sign under sourceInstance and select your SFTP adapter instance.

    It should look something like this:

     

     

    After completing the configuration, click Create Instance.

     

    Triggering the Procedure

    Now that the procedure instance is created and configured, we'll manually trigger the procedure. We will do this in the Edit screen for the procedure. Navigate to the Procedure catalog, hover over the procedure, and click Open.

     

     

    On the console that opens, click Try It Out and then click Select Instance.

     

    Select Instance 1 (or the instance with the name you set up above).

     

    Now click Select Trigger on the console.

     

    In the Chooser window, specify the input JSON to use as the trigger to kick off the procedure. Your trigger should contain the path to the .CSV file you want to upload to the CRM system and the destination object as shown below:

     

    The pathToFile field should be relative to the root of your SFTP server (ex: "/home/Public/test-docs/MockData-Accounts.csv"). The destinationObject is a string that represents the module type you would like to import this data into. Each CRM will have a different set of possible destination objects. So how do you know what value to put for this parameter? I suggest using the API docs for the CRM adapter instance. You can invoke the GET /objects endpoint to retrieve a list of all possible object types (and their spelling and capitalizations) from the response. Remember to select your instance from the left side list so that you may use the Try It Out button. 
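For example, given the names returned by GET /objects, a small case-insensitive lookup like this sketch returns the exact string (spelling and capitalization) to use:

```javascript
// Find the adapter's exact object name matching what we want to import.
function findDestinationObject(objectNames, wanted) {
  return objectNames.find((n) => n.toLowerCase() === wanted.toLowerCase());
}
```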

     

    Since we are looking to create Account records from our data, I see from the response of the GET /objects call that we need to set destinationObject equal to "accounts" (all lowercase and plural). Now that the input JSON for our trigger is all set, all that's left is to click Run.

     

    Verifying the Procedure Execution

    After giving the procedure some time to complete, we can view details of the procedure executions to ensure they were successfully completed.

    Navigate to the Procedure instances page, hover over the instance, and click Executions.

     

    In the Procedure Executions column, click your procedure execution to view its steps.

     

    Here, we see all three steps of the procedure were successful. You can also click on an individual step to see more details. Once the procedure is executed, the data has been uploaded to Sugar Sell. Log in to Sugar Sell to see your newly created records.

     

    That's it! The process is complete. 

     

    So, what did this procedure actually do?

    As you can see, the procedure only has 3 steps: the trigger; a JavaScript step that sets up values; and the final File Stream step. That last one is where the magic happens. A File Stream step essentially does a download from one platform and an upload to another. In the JavaScript step, called generateMetadata, we are simply defining values to be used in the Stream File step. Here's a bulleted list of the parameters used in the Stream File step, what they mean, and how they are defined:

    • Download Adapter Instance Variable
      • This is the Adapter Instance that you have authenticated for the source data. In our case, it should be an SFTP Adapter authenticated to your SFTP server where the CSV data file resides
      • Our procedure variable is called sourceInstance so we use the curly bracket notation to specify we are using a config variable called sourceInstance
    • Download Method
      • To grab the file from the SFTP server, we will invoke a GET endpoint
    • Download API
      • This is the path to the desired endpoint. Since we are looking to grab a file from the SFTP server, we are going to use the /files endpoint
    • Upload Adapter Instance Variable
      • Similarly to the Download Adapter Instance Variable, this is the config variable called targetInstance that we assigned when making the Procedure instance
      • We again need to use the curly bracket syntax
    • Upload Method
      • To send the file to the CRM, we will invoke a POST endpoint
    • Upload API
      • Which endpoint are we calling for the POST? In this example, we are going to call the bulk endpoint for the object type we are trying to create. In the Trigger Event JSON, we assigned pathToFile and destinationObject values to be used later. In order to use those values properly, we needed to create properly formatted parameters. In our generateMetadata procedure step, we grabbed the destinationObject and assigned it to a variable called objectName. So, our Upload API value should be /bulk/${steps.generateMetadata.objectName} which says grab the value of the objectName variable from the step called generateMetadata
    • Download Headers
      • This parameter is hidden by default. Click Show Advanced to reveal these additional parameters
      • We defined a downloadHeaders variable in the generateMetadata step that sets the acceptable mime type of files to download to 'text/csv' as a JSON object
    • Download Query
      • In the generateMetadata step, we also defined a variable called path that contains a JSON object with the value from the pathToFile value we defined in the trigger event JSON
      • We again need to use the curly bracket syntax
    • Upload Headers
      • We defined an uploadHeaders variable in the generateMetadata step that sets the content type of the upload to 'multipart/form-data' as a JSON object
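The generateMetadata step described by the list above could be sketched like this (how the trigger JSON is exposed to the step is an assumption; check a real execution to confirm):

```javascript
// Turn the trigger's pathToFile and destinationObject into the values
// the Stream File step consumes.
function generateMetadata(triggerArgs, done) {
  done({
    objectName: triggerArgs.destinationObject,                // e.g. "accounts"
    path: { path: triggerArgs.pathToFile },                   // Download Query
    downloadHeaders: { accept: "text/csv" },                  // Download Headers
    uploadHeaders: { "Content-Type": "multipart/form-data" }, // Upload Headers
  });
}
```

The Upload API field then references the first value as /bulk/${steps.generateMetadata.objectName}.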

     

    I hope that helps you understand what it takes to grab a csv file from an SFTP server and send it to a CRM as individual records. Play around with it and see what improvements you can make. I'd love to hear about it in the comments!

    Sugar Integrate makes integrating with (or migrating data between) systems so much easier than doing it manually through code. Even with this incredible iPaaS, however, some integrations can still feel daunting. So, the Sugar Integrate team has begun identifying some common integration and migration use cases that we think make sense to templatize (is that a word?).

     

    Today, we have published our first 2 Sugar Integrate template solutions. Each solution is a zip file containing almost everything you need to complete an advanced migration. We identified, created, and mapped the Common Resources. The formulas (and their sub formulas) have been completely written. All you need to do is install the components into Sugar Integrate, prepare your external systems (Sugar Sell, Salesforce, etc), and run the procedures.

     

    Don't worry. We've also created installation and implementation guides to go along with the templates.

     

    So, what do these particular templates do?

     

    Let's start with the CRM Data Migrator for Sales template. We designed this template to facilitate the migration of records from a CRM (the documentation focuses on Salesforce) to Sugar Sell or Enterprise. It will copy ALL objects into Sugar - including custom modules. To move the custom modules, however, you will need to make some mappings on your own (because we don't know what your custom modules are called or where you want them to live in Sugar).

     

    Take a look at one record that was migrated from Salesforce to Sugar Sell:

     

    The second template we have created is the Opportunity to Cash template. This template automates the opportunity or quote to cash process between Sugar Sell or Sugar Enterprise and an ERP system. It supports synchronizing the account/contact data and financial history (such as invoices and payments) from the ERP system to Sugar. The template also supports automatic order or estimate creation in ERP when a quote is accepted or an opportunity closes in Sugar Sell or Sugar Enterprise.

     

    Here's an example of data that was migrated from QuickBooks Online to Sugar Sell:

     

    To access the templates, navigate to the Software Downloads page and select "Integrate" from the dropdown. Then simply select the correct version and package to download and follow the instructions in the Installation Guide.

     

    We really hope that you find these templates useful. We plan to create more so keep an eye out for future announcements of new templates.

     

    For more information on the new templates, please refer to the Sugar Integrate Template Release Notes. Here's a list of additional resources that may be of help for your migrations or integrations: