Dataset columns:
- sourceName: stringclasses (1 value)
- url: stringlengths (52-145)
- action: stringclasses (1 value)
- body: stringlengths (0-60.5k)
- format: stringclasses (1 value)
- metadata: dict
- title: stringlengths (5-125)
- updated: stringclasses (3 values)
devcenter
https://www.mongodb.com/developer/products/realm/type-projections
created
# Realm-Swift Type Projections

## Introduction

Realm natively provides a broad set of data types, including `Bool`, `Int`, `Float`, `Double`, `String`, `Date`, `ObjectId`, `List`, `MutableSet`, `enum`, `Map`, … But there are other data types that many of your iOS apps are likely to use. As an example, if you're using Core Graphics, then it's hard to get away without using types such as `CGFloat`, `CGPoint`, etc. When working with SwiftUI, you use the `Color` struct when working with colors.

A typical design pattern is to persist data using types natively supported by Realm, and then use a richer set of types in your app. When reading data, you add extra boilerplate code to convert it to your app's types. When persisting data, you add more boilerplate code to convert your data back into types supported by Realm. That works fine and gives you complete control over the type conversions. The downside is that you can end up with dozens of places in your code where you need to make the conversion.

Type projections still give you total control over how to map a `CGPoint` into something that can be persisted in Realm. But you write the conversion code just once and then forget about it. The Realm-Swift SDK will then ensure that types are converted back and forth as required in the rest of your app.

The Realm-Swift SDK enables this by adding two new protocols that you can use to extend any Swift type. You opt to implement either `CustomPersistable` or the version that's allowed to fail (`FailableCustomPersistable`):

```swift
protocol CustomPersistable {
    associatedtype PersistedType
    init(persistedValue: PersistedType)
    var persistableValue: PersistedType { get }
}

protocol FailableCustomPersistable {
    associatedtype PersistedType
    init?(persistedValue: PersistedType)
    var persistableValue: PersistedType { get }
}
```

In this post, I'll show how the Realm-Drawing app uses type projections to interface between Realm and Core Graphics.

## Prerequisites

- iOS 15+
- Xcode 13.2+
- Realm-Swift 10.21.0+

## The Realm-Drawing App

Realm-Drawing is a simple, collaborative drawing app. If two people log into the app using the same username, they can work on a drawing together. All strokes making up the drawing are persisted to Realm and then synced to all other instances of the app where the same username is logged in.

It's currently iOS-only, but it would also sync with any Android drawing app that is connected to the same Realm back end.

## Using Type Projections in the App

The Realm-Drawing iOS app uses three types that aren't natively supported by Realm:

- `CGFloat`
- `CGPoint`
- `Color` (SwiftUI)

In this section, you'll see how simple it is to use type projections to convert them into types that can be persisted to Realm and synced.

### Realm Schema (The Model)

An individual drawing is represented by a single `Drawing` object:

```swift
class Drawing: Object, ObjectKeyIdentifiable {
    @Persisted(primaryKey: true) var _id: ObjectId
    @Persisted var name = UUID().uuidString
    @Persisted var lines = RealmSwift.List<Line>()
}
```

A Drawing contains a `List` of `Line` objects:

```swift
class Line: EmbeddedObject, ObjectKeyIdentifiable {
    @Persisted var lineColor: Color
    @Persisted var lineWidth: CGFloat = 5.0
    @Persisted var linePoints = RealmSwift.List<CGPoint>()
}
```

It's the `Line` class that uses the non-Realm-native types. Let's see how each type is handled.

#### CGFloat

I extend `CGFloat` to conform to Realm-Swift's `CustomPersistable` protocol.
All I needed to provide was:

- An initializer to convert what's persisted in Realm (a `Double`) into the `CGFloat` used by the model
- A method to convert a `CGFloat` into a `Double`:

```swift
extension CGFloat: CustomPersistable {
    public typealias PersistedType = Double
    public init(persistedValue: Double) { self.init(persistedValue) }
    public var persistableValue: Double { Double(self) }
}
```

The view can then use `lineWidth` from the model object without worrying about how it's converted by the Realm SDK:

```swift
context.stroke(
    path,
    with: .color(line.lineColor),
    style: StrokeStyle(
        lineWidth: line.lineWidth,
        lineCap: .round,
        lineJoin: .round
    )
)
```

#### CGPoint

`CGPoint` is a little trickier, as it can't just be cast into a Realm-native type. `CGPoint` contains the x and y coordinates for a point, and so, I create a Realm-friendly class (`PersistablePoint`) that stores just that—`x` and `y` values as `Double`s:

```swift
public class PersistablePoint: EmbeddedObject, ObjectKeyIdentifiable {
    @Persisted var x = 0.0
    @Persisted var y = 0.0

    convenience init(_ point: CGPoint) {
        self.init()
        self.x = point.x
        self.y = point.y
    }
}
```

I implement the `CustomPersistable` protocol for `CGPoint` by mapping between a `CGPoint` and the `x` and `y` coordinates within a `PersistablePoint`:

```swift
extension CGPoint: CustomPersistable {
    public typealias PersistedType = PersistablePoint
    public init(persistedValue: PersistablePoint) { self.init(x: persistedValue.x, y: persistedValue.y) }
    public var persistableValue: PersistablePoint { PersistablePoint(self) }
}
```

#### SwiftUI.Color

`Color` is made up of the three RGB components plus the opacity. I use the `PersistableColor` class to persist a representation of `Color`:

```swift
public class PersistableColor: EmbeddedObject {
    @Persisted var red: Double = 0
    @Persisted var green: Double = 0
    @Persisted var blue: Double = 0
    @Persisted var opacity: Double = 0

    convenience init(color: Color) {
        self.init()
        if let components = color.cgColor?.components {
            if components.count >= 3 {
                red = components[0]
                green = components[1]
                blue = components[2]
            }
            if components.count >= 4 {
                opacity = components[3]
            }
        }
    }
}
```

The extension to implement `CustomPersistable` for `Color` provides methods to initialize `Color` from a `PersistableColor`, and to generate a `PersistableColor` from itself:

```swift
extension Color: CustomPersistable {
    public typealias PersistedType = PersistableColor

    public init(persistedValue: PersistableColor) {
        self.init(
            .sRGB,
            red: persistedValue.red,
            green: persistedValue.green,
            blue: persistedValue.blue,
            opacity: persistedValue.opacity)
    }

    public var persistableValue: PersistableColor {
        PersistableColor(color: self)
    }
}
```

The view can then use `selectedColor` from the model object without worrying about how it's converted by the Realm SDK:

```swift
context.stroke(
    path,
    with: .color(line.lineColor),
    style: StrokeStyle(lineWidth: line.lineWidth, lineCap: .round, lineJoin: .round)
)
```

## Conclusion

Type projections provide a simple, elegant way to convert any type to types that can be persisted and synced by Realm. It's your responsibility to define how the mapping is implemented. After that, the Realm SDK takes care of everything else.

Please provide feedback and ask any questions in the Realm Community Forum.
md
{ "tags": [ "Realm", "Swift" ], "pageDescription": "Simply persist and sync Swift objects containing any type in Realm", "contentType": "Tutorial" }
Realm-Swift Type Projections
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/typescript/myleg
created
# myLeG

## Creator

Justus Alvermann, student in Germany, developed this project.

## About the Project

The project shows the substitutions of my school in a more readable way and also sorted, so the users only see the entries that are relevant to them. It can also send out push notifications for new or changed substitutions and has some information about the current COVID regulations.

## Inspiration

I didn't like the current way substitutions are presented and also wanted a way to be notified about upcoming substitutions. In addition, I was tired of coming to school even though the first lessons were cancelled because I forgot to look at the substitution schedule.

## Why MongoDB?

Since not every piece of information (e.g., new room for cancelled lessons) on the substitution plan is available for all entries, a document-based solution was the only sensible database.

## How It Works

Every 15 minutes, a Node.js script crawls the substitution plan of my school and saves all new or changed entries into my MongoDB collection. This script also sends out push notifications via the web messaging API to the users who subscribed to them. I used Angular for the frontend and Vercel serverless functions for the backend. The serverless functions get the information from the database and can be queried via their REST API. The login credentials are stored in MongoDB too, and logins are saved as JWTs in the users' cookies.
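To make the crawl-and-save step more concrete, here is a minimal TypeScript sketch of the kind of upsert such a crawler could run with the MongoDB Node.js driver. The database and collection names and the `Substitution` fields are invented for illustration and are not taken from the actual project:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical shape of one substitution entry scraped from the school's plan.
interface Substitution {
  date: string;      // e.g. "2024-05-21"
  className: string; // e.g. "10b"
  lesson: number;    // period of the day
  subject: string;
  note?: string;     // optional info, e.g. a new room for a moved lesson
}

// Upsert scraped entries and return the ones that are new or have changed,
// so push notifications are only sent for real updates.
async function saveEntries(uri: string, entries: Substitution[]): Promise<Substitution[]> {
  const client = new MongoClient(uri);
  try {
    const collection = client.db("myleg").collection<Substitution>("substitutions");
    const changed: Substitution[] = [];
    for (const entry of entries) {
      const result = await collection.updateOne(
        { date: entry.date, className: entry.className, lesson: entry.lesson },
        { $set: entry },
        { upsert: true }
      );
      // modifiedCount > 0 means an existing entry changed; upsertedCount > 0 means it is new.
      if (result.modifiedCount > 0 || result.upsertedCount > 0) {
        changed.push(entry);
      }
    }
    return changed;
  } finally {
    await client.close();
  }
}
```

Returning only the new or changed entries keeps the notifications limited to real updates, which matches how the project describes its push behaviour.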
md
{ "tags": [ "TypeScript", "Atlas", "JavaScript", "Vercel", "Serverless" ], "pageDescription": "This project downloads the substitution plan of my school and converts it into a user-friendly page.", "contentType": "Code Example" }
myLeG
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-ios-database-access-using-realm-studio
created
# Accessing Realm Data on iOS Using Realm Studio The Realm makes it much faster to develop mobile applications. Realm Studio is a desktop app that lets you view, manipulate, and import data held within your mobile app's Realm database. This article steps through how to track down the locations of your iOS Realm database files, open them in Realm Studio, view the data, and make changes. If you're developing an Android app, then finding the Realm database files will be a little different (we'll follow up with an Android version later), but if you can figure out how to locate the Realm file, then the instructions on using Realm Studio should work. ## Prerequisites If you want to build and run the app for yourself, this is what you'll need: - A Mac. - Xcode – any reasonably recent version will work. - Realm Studio 10.1.0+ – earlier versions had an issue when working with Realms using Atlas Device Sync. I'll be using the data and schema from my existing RChat iOS app. You can use any Realm-based app, but if you want to understand more about RChat, check out Building a Mobile Chat App Using Realm – Data Architecture and Building a Mobile Chat App Using Realm – Integrating Realm into Your App. ## Walkthrough This walkthrough shows you how to: - Install & Run Realm Studio - Track Down Realm Data Files – Xcode Simulator - Track Down Realm Data Files – Real iOS Devices - View Realm Data - Add, Modify, and Delete Data ### Install & Run Realm Studio I'm using Realm Studio 10.1.1 to create this article. If you have 10.1.0 installed, then that should work too. If you don't already have Realm Studio 10.1.0+ installed, then download it here and install. That's it. But, when you open the app, you're greeted by this: There's a single button letting me "Open Realm file," and when I click on it, I get a file explorer where I can browse my laptop's file system. Where am I supposed to find my Realm file? I cover that in the next section. ### Track Down Realm Data Files – Xcode Simulator If you're running your app in one of Xcode's simulators, then the Realm files are stored in your Mac's file system. They're typically somewhere along the lines of `~/Library/Developer/CoreSimulator/Devices/???????/data/Containers/Data/Application/???????/Documents/mongodb-realm/???????/????????/???????.realm`. The scientific way to find the file's location is to add some extra code to your app or to use a breakpoint. While my app's in development, I'll normally print the location of a Realm's file whenever I open it. Don't worry if you're not explicitly opening your Realm(s) in your code (e.g., if you're using the default realm) as I'll cover the file search approach soon. This is the code to add to your app once you've opened your realm – `realm`: ``` swift print("User Realm User file location: \(realm.configuration.fileURL!.path)") ``` If you don't want to edit the code, then an Xcode breakpoint delivers the same result: Once you have the file location, open it in Realm Studio from the terminal: ``` bash open /Users/andrew.morgan/Library/Developer/CoreSimulator/Devices/E7526AFE-E886-490A-8085-349C8E8EDC5B/data/Containers/Data/Application/C3ADE2F2-ABF0-4BD0-9F47-F21894E850DB/Documents/mongodb-realm/rchat-saxgm/60099aefb33c57e9a9828d23/%22user%3D60099aefb33c57e9a9828d23%22.realm ``` Less scientific but simpler is to take advantage of the fact that the data files will always be of type `realm` and located somewhere under `~/Library/Developer/CoreSimulator/Devices`. 
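If you prefer the terminal to Finder, a quick way to list every candidate file is a `find` command along these lines (a minimal sketch, not part of the original walkthrough; adjust the path if your simulator data lives elsewhere):

``` bash
# List all Realm files created by simulator apps, most recently modified first
find ~/Library/Developer/CoreSimulator/Devices -type f -name "*.realm" -exec ls -lt {} + 2>/dev/null
```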
Open Finder in that folder: `open ~/Library/Developer/CoreSimulator/Devices` and then create a "saved search" so that you can always find all of your realm files. You'll most often be looking for the most recent one. The nice thing about this approach is that you only need to create the search once. Then click on "Realms," find the file you need, and then double-click it to open it in Realm Studio. ### Track Down Realm Data Files – Real iOS Devices Unfortunately, you can't use Realm Studio to interact with live Realm files on a real iOS device. What we can do is download a copy of your app's Realm files from your iOS device to your laptop. You need to connect your iOS device to your Mac, agree to trust the computer, etc. Once connected, you can use Xcode to download a copy of the "Container" for your app. Open the Xcode device manager—"Window/Devices and Simulators." Find your device and app, then download the container: Note that you can only access the containers for apps that you've built and installed through Xcode, not ones you've installed through the App Store. Right-click the downloaded file and "Show Package Contents." You'll find your Realm files under `AppData/Documents/mongodb-realm//?????`. Find the file for the realm you're interested in and double-click it to open it in Realm Studio. ### View Realm Data After opening a Realm file, Realm Studio will show a window with all of the top-level Realm Object classes in use by your app. In this example, the realm I've opened only contains instances of the `Chatster` class. There's a row for each `Chatster` Object that I'd created through the app: If there are a lot of objects, then you can filter them using a simple query syntax: If the Realm Object class contains a `List` or an `EmbeddedObject`, then they will show as blue links—in this example, `conversations` and `userPreferences` are a list of `Conversation` objects and an embedded `UserPreferences` object respectively: Clicking on one of the `UserPreferences` links brings up the contents of the embedded object: ### Add, Modify, and Delete Data The ability to view your Realm data is invaluable for understanding what's going on inside your app. Realm Studio takes it a step further by letting you add, modify, and delete data. This ability helps to debug and test your app. As a first example, I click on "Create ChatMessage" to add a new message to a conversation: Fill out the form and click "Create" to add the new `ChatMessage` object: We can then observe the effect of that change in our app: I could have tested that change using the app, but there are different things that I can try using Realm Studio. I haven't yet included the ability to delete or edit existing messages, but I can now at least test that this view can cope when the data changes: ## Summary In this article, we've seen how to find and open your iOS Realm data files in Realm Studio. We've viewed the data and then made changes and observed the iOS app reacting to those changes. Realm Studio has several other useful features that I haven't covered here. As it's a GUI, it's fairly easy to figure out how to use them, and the docs are available if you get stuck. These functions include: - Import data into Realm from a CSV file. - Export your Realm data as a JSON file. - Edit the schema. - Open the Realm file from an app and export the schema in a different language. We used this for the WildAid O-FISH project. 
I created the schema in the iOS app, and another developer exported a Kotlin version of the schema from Realm Studio to use in the Android app. ## References - GitHub Repo for RChat App. - Read Building a Mobile Chat App Using Realm – Data Architecture to understand the data model and partitioning strategy behind the RChat app. - Read Building a Mobile Chat App Using Realm – Integrating Realm into Your App to learn how to create the RChat app. - If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine. - GitHub Repo for Realm-Cocoa SDK. - Realm Cocoa SDK documentation. - MongoDB's Realm documentation. > If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
md
{ "tags": [ "Realm", "Swift", "iOS", "Postman API" ], "pageDescription": "Discover how to access and manipulate your iOS App's Realm data using the Realm Studio GUI.", "contentType": "Tutorial" }
Accessing Realm Data on iOS Using Realm Studio
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/saving-data-in-unity3d-using-binary-reader-writer
created
# Saving Data in Unity3D Using BinaryReader and BinaryWriter (Part 3 of the Persistence Comparison Series) Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In Part 1 of this series, we explored Unity's own solution: `PlayerPrefs`. This time, we look into one of the ways we can use the underlying .NET framework by saving files. Here is an overview of the complete series: - Part 1: PlayerPrefs - Part 2: Files - Part 3: BinaryReader and BinaryWriter *(this tutorial)* - Part 4: SQL *(coming soon)* - Part 5: Realm Unity SDK - Part 6: Comparison of all those options Like Part 1 and 2, this tutorial can also be found in the https://github.com/realm/unity-examples repository on the persistence-comparison branch. Each part is sorted into a folder. The three scripts we will be looking at are in the `BinaryReaderWriter` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding. ## Example game *Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.* The goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game. Therefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write. A simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count. When you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`. You can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector. The scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`. ```cs using UnityEngine; /// /// This script shows the basic structure of all other scripts. /// public class HitCountExample : MonoBehaviour { // Keep count of the clicks. SerializeField] private int hitCount; // 1 private void Start() // 2 { // Read the persisted data and set the initial hit count. hitCount = 0; // 3 } private void OnMouseDown() // 4 { // Increment the hit count on each click and save the data. hitCount++; // 5 } } ``` The first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor. Whenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to. The second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series. 
## BinaryReader and BinaryWriter (See `BinaryReaderWriterExampleSimple.cs` in the repository for the finished version.) In the previous tutorial, we looked at `Files`. This is not the only way to work with data in files locally. Another option that .NET is offering us is the [`BinaryWriter` and BinaryReader. > The BinaryWriter class provides methods that simplify writing primitive data types to a stream. For example, you can use the Write method to write a Boolean value to the stream as a one-byte value. The class includes write methods that support different data types. Parts of this tutorial will look familiar if you have worked through the previous one. We will use `File` again here to create and open file streams which can then be used by the `BinaryWriter` to save data into those files. Let's have a look at what we have to change in the example presented in the previous section to save the data using `BinaryWriter` and then read it again using it's opposite `BinaryReader`: ```cs using System; using System.IO; using UnityEngine; public class BinaryReaderWriterExampleSimple : MonoBehaviour { // Resources: // https://docs.microsoft.com/en-us/dotnet/api/system.io.binarywriter?view=net-5.0 // https://docs.microsoft.com/en-us/dotnet/api/system.io.binaryreader?view=net-5.0 // https://docs.microsoft.com/en-us/dotnet/api/system.io.filestream?view=net-5.0 // https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-statement // https://docs.microsoft.com/en-us/dotnet/api/system.io.stream?view=net-5.0 SerializeField] private int hitCount = 0; private const string HitCountFile = "BinaryReaderWriterExampleSimple"; // 1 private void Start() // 7 { // Check if the file exists to avoid errors when opening a non-existing file. if (File.Exists(HitCountFile)) // 8 { // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 9 using BinaryReader binaryReader = new(fileStream); // 10 hitCount = binaryReader.ReadInt32(); // 11 } } private void OnMouseDown() // 2 { hitCount++; // 3 // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 4 using BinaryWriter binaryWriter = new(fileStream); // 5 binaryWriter.Write(hitCount); // 6 } } ``` First we define a name for the file that will hold the data (1). If no additional path is provided, the file will just be saved in the project folder when running the game in the Unity editor or the game folder when running a build. This is fine for the example. Whenever we click on the capsule (2) and increment the hit count (3), we need to save that change. First, we open the file that is supposed to hold the data (4) by calling `File.Open`. It takes two parameters: the file name, which we defined already, and a `FileMode`. Since we want to create a new file, the `FileMode.Create` option is the right choice here. Using this `FileStream`, we then create a new `BinaryWriter` that takes the stream as an argument (5). After that, we can simply write the current `hitCount` to the file using `Write()` (6). The next time we start the game (7), we check if the file that we saved our data to already exists. If so, it means we have saved data before and can now read it. 
Once again, we create a new `Filestream` (9) first, this time using the `FileMode.Open` option. To read the data from the file, we need to use the `BinaryReader` (10), which also gets initialized with the `FileStream` identical to the `BinaryWriter`. Finally, using `ReadInt32()`, we can read the hit count from the file and assign it to `hitCount`. Let's look into extending this simple example in the next section. ## Extended example (See `BinaryReaderWriterExampleExtended.cs` in the repository for the finished version.) The previous section showed the most simple example, using just one variable that needs to be saved. What if we want to save more than that? Depending on what needs to be saved, there are several different approaches. You could use multiple files or you can write multiple variables inside the same file. The latter shall be shown in this section by extending the game to recognize modifier keys. We want to detect normal clicks, Shift+Click, and Control+Click. First, update the hit counts so that we can save three of them: ```cs [SerializeField] private int hitCountUnmodified = 0; [SerializeField] private int hitCountShift = 0; [SerializeField] private int hitCountControl = 0; ``` We also want to use a different file name so we can look at both versions next to each other: ```cs private const string HitCountFile = "BinaryReaderWriterExampleExtended"; ``` The last field we need to define is the key that is pressed: ```cs private KeyCode modifier = default; ``` The first thing we need to do is check if a key was pressed and which key it was. Unity offers an easy way to achieve this using the [`Input` class's `GetKey()` function. It checks if the given key was pressed or not. You can pass in the string for the key or, to be a bit more safe, just use the `KeyCode` enum. We cannot use this in the `OnMouseClick()` when detecting the mouse click though: > Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop. Add a new method called `Update()` (1) which is called in every frame. Here we need to check if the `Shift` or `Control` key was pressed (2) and if so, save the corresponding key in `modifier` (3). In case none of those keys was pressed (4), we consider it unmodified and reset `modifier` to its `default` (5). ```cs private void Update() // 1 { // Check if a key was pressed. if (Input.GetKey(KeyCode.LeftShift)) // 2 { // Set the LeftShift key. modifier = KeyCode.LeftShift; // 3 } else if (Input.GetKey(KeyCode.LeftControl)) // 2 { // Set the LeftControl key. modifier = KeyCode.LeftControl; // 3 } else // 4 { // In any other case reset to default and consider it unmodified. modifier = default; // 5 } } ``` Now to saving the data when a click happens: ```cs private void OnMouseDown() // 6 { // Check if a key was pressed. switch (modifier) { case KeyCode.LeftShift: // 7 // Increment the Shift hit count. hitCountShift++; // 8 break; case KeyCode.LeftControl: // 7 // Increment the Control hit count. hitCountControl++; // 8 break; default: // 9 // If neither Shift nor Control was held, we increment the unmodified hit count. hitCountUnmodified++; // 10 break; } // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. 
using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 11 using BinaryWriter binaryWriter = new(fileStream, Encoding.UTF8); // 12 binaryWriter.Write(hitCountUnmodified); // 13 binaryWriter.Write(hitCountShift); // 13 binaryWriter.Write(hitCountControl); // 13 } ``` Whenever a mouse click is detected on the capsule (6), we can then perform a similar check to what happened in `Update()`, only we use `modifier` instead of `Input.GetKey()` here. Check if `modifier` was set to `KeyCode.LeftShift` or `KeyCode.LeftControl` (7) and if so, increment the corresponding hit count (8). If no modifier was used (9), increment the `hitCountUnmodified` (10). Similar to the simple version, we create a `FileStream` (11) and with it the `BinaryWriter` (12). Writing multiple variables into the file can simply be achieved by calling `Write()` multiple times (13), once for each hit count that we want to save. Start the game, and click the capsule using Shift and Control. You should see the three counters in the Inspector. After stopping the game and therefore saving the data, a new file `BinaryReaderWriterExampleExtended` should exist in your project folder. Have a look at it. It should look something like this: The three hit counters can be seen in there and correspond to the values in the inspector: - `0f` == 15 - `0c` == 12 - `05` == 5 Last but not least, let's look at how to load the file again when starting the game (14): ```cs private void Start() // 14 { // Check if the file exists to avoid errors when opening a non-existing file. if (File.Exists(HitCountFile)) // 15 { // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 16 using BinaryReader binaryReader = new(fileStream); // 17 hitCountUnmodified = binaryReader.ReadInt32(); // 18 hitCountShift = binaryReader.ReadInt32(); // 18 hitCountControl = binaryReader.ReadInt32(); // 18 } } ``` First, we check if the file even exists (15). If we ever saved data before, this should be the case. If it exists, we read the databy creating a `FileStream` again (16) and opening a `BinaryReader` with it (17). Similar to writing with `Write()` (on the `BinaryWriter`), we use `ReadInt32()` (18) to read an `integer`. We do this three times since we saved them all individually. Note that knowing the structure of the file is necessary here. If we saved an `integers`, a `boolean`, and a `string`, we would have to use `ReadInt32()`, `ReadBoolean()`, and `ReadString()`. The more complex data gets, the more complicated it will be to make sure there are no mistakes in the structure when reading or writing it. Different types, adding and removing variables, changing the structure. The more data we want to add to this file, the more it makes sense to think about alternatatives. For this tutorial, we will stick with the `BinaryReader` and `BinaryWriter` and see what we can do to decrease the complexity a bit when adding more data. One of those options will be shown in the next section. ## More complex data (See `BinaryReaderWriterExampleJson.cs` in the repository for the finished version.) JSON is a very common approach when saving structured data. It's easy to use and there are frameworks for almost every language. The .NET framework provides a `JsonSerializer`. Unity has its own version of it: `JsonUtility`. 
As you can see in the documentation, the functionality boils down to these three methods: - *FromJson()*: Create an object from its JSON representation. - *FromJsonOverwrite()*: Overwrite data in an object by reading from its JSON representation. - *ToJson()*: Generate a JSON representation of the public fields of an object. The `JsonUtility` transforms JSON into objects and back. Therefore, our first change to the previous section is to define such an object with public fields: ```cs private class HitCount { public int Unmodified; public int Shift; public int Control; } ``` The class itself can be `private` and just be added inside the `BinaryReaderWriterExampleJson` class, but its fields need to be public. As before, we use a different file to save this data. Update the filename to: ```cs private const string HitCountFile = "BinaryReaderWriterExampleJson"; ``` When saving the data, we will use the same `Update()` method as before to detect which key was pressed. The first part of `OnMouseDown()` (1) can stay the same as well, since this part only increments the hit count depending on the modifier used. ```cs private void OnMouseDown() // 1 { // Check if a key was pressed. switch (modifier) { case KeyCode.LeftShift: // Increment the Shift hit count. hitCountShift++; break; case KeyCode.LeftControl: // Increment the Control hit count. hitCountControl++; break; default: // If neither Shift nor Control was held, we increment the unmodified hit count. hitCountUnmodified++; break; } // 2 // Create a new HitCount object to hold this data. var updatedCount = new HitCount { Unmodified = hitCountUnmodified, Shift = hitCountShift, Control = hitCountControl, }; // 3 // Create a JSON using the HitCount object. var jsonString = JsonUtility.ToJson(updatedCount, true); // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 5 using BinaryWriter binaryWriter = new(fileStream, Encoding.UTF8); // 6 binaryWriter.Write(jsonString); // 7 } ``` However, we need to update the second part. Instead of a string array, we create a new `HitCount` object and set the three public fields to the values of the hit counters (2). Using `JsonUtility.ToJson()`, we can transform this object to a string (3). If you pass in `true` for the second, optional parameter, `prettyPrint`, the string will be formatted in a nicely readable way. Finally, as before, we create a `FileStream` (5) and `BinaryWriter` (6) and use `Write()` (7) to write the `jsonString` into the file. Then, when the game starts (8), we need to read the data back into the hit count fields: ```cs private void Start() // 8 { // Check if the file exists to avoid errors when opening a non-existing file. if (File.Exists(HitCountFile)) // 9 { // Open a stream to the file that the `BinaryReader` can use to read data. // They need to be disposed at the end, so `using` is good practice // because it does this automatically. using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 10 using BinaryReader binaryReader = new(fileStream); // 11 // 12 var jsonString = binaryReader.ReadString(); var hitCount = JsonUtility.FromJson(jsonString); // 13 if (hitCount != null) { // 14 hitCountUnmodified = hitCount.Unmodified; hitCountShift = hitCount.Shift; hitCountControl = hitCount.Control; } } } ``` We check if the file exists first (9). 
In case it does, we saved data before and can proceed reading it. Using a `FileStream` again (10) with `FileMode.Open`, we create a `BinaryReader` (11). Since we are reading a JSON string, we need to use `ReadString()` (12) this time and then transform it via `FromJson()` into a `HitCount` object. If this worked out (13), we can then extract `hitCountUnmodified`, `hitCountShift`, and `hitCountControl` from it (14). Note that the data is saved in a binary format, which is, of course, not safe. Tools to read binary are available and easy to find. For example, this `BinaryReaderWriterExampleJson` file read with `bless` would result in this: You can clearly identify the three values we saved. While the `BinaryReader` and `BinaryWriter` are a simple and easy way to save data, and they at least offer a way so that the data is not immediately readable, they are by no means safe. In a future tutorial, we will look at encryption and how to improve the safety of your data, along with other useful features like migrations and performance improvements. ## Conclusion In this tutorial, we learned how to utilize `BinaryReader` and `BinaryWriter` to save data. `JsonUtility` helps structure this data. They are simple and easy to use, and not much code is required. What are the downsides, though? First of all, we open, write to, and save the file every single time the capsule is clicked. While not a problem in this case and certainly applicable for some games, this will not perform very well when many save operations are made as your game gets a bit more complex. Also, even though the data is stored in a binary format, it is not protected and can easily be read and edited by the player. The more complex your data is, the more complex it will be to actually maintain this approach. What if the structure of the `HitCount` object changes? You have to account for that when loading an older version of the JSON. Migrations are necessary. In the following tutorials, we will have a look at how databases can make this job a lot easier and take care of the problems we face here. Please provide feedback and ask any questions in the Realm Community Forum.
md
{ "tags": [ "Realm", "Unity", ".NET" ], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Tutorial" }
Saving Data in Unity3D Using BinaryReader and BinaryWriter
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/typescript/twitter-trend-analyser
created
# Trends analyser

## Creators

Osama Bin Junaid contributed this project.

## About the Project

The project uses the Twitter API to fetch real-time trends data and save it into MongoDB for later analysis.

## Inspiration

In today's world, it's very hard to keep up with everything that's happening around us. Twitter is one of the first places where things get reported, so my motive was to build an application through which one can see all trends in one place, and also why something is trending (trying to solve this one now).

## Why MongoDB?

I used MongoDB because of its document nature: I can directly save my JSON objects without breaking them down into tables, and also because it's easy to design schemas and their relationships using MongoDB.

## How It Works

It works by repeatedly invoking 8 serverless functions on IBM Cloud at 15-minute intervals. These functions call the Twitter APIs, get the data, and do a little transformation before saving the data to MongoDB. The backend then serves the data to the React frontend.

GitHub repo frontend: https://github.com/ibnjunaid/trendsFunction
GitHub repo backend: https://github.com/ibnjunaid/trendsBackend
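Because the functions store a snapshot of the trends every 15 minutes, analysing how a hashtag changes over time boils down to a time-bucketed aggregation. The sketch below is illustrative only: the database and collection names and the document fields (`name`, `volume`, `capturedAt`) are assumptions rather than the project's actual schema, and the `$dateTrunc` stage requires MongoDB 5.0 or newer.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical document shape: { name: "#hashtag", volume: 1234, capturedAt: Date }
async function hashtagHistory(uri: string, hashtag: string) {
  const client = new MongoClient(uri);
  try {
    const snapshots = client.db("trends").collection("snapshots");
    // Group the 15-minute snapshots of one hashtag into hourly buckets
    // so its tweet volume can be plotted over time.
    return await snapshots
      .aggregate([
        { $match: { name: hashtag } },
        {
          $group: {
            _id: { $dateTrunc: { date: "$capturedAt", unit: "hour" } },
            avgVolume: { $avg: "$volume" },
            samples: { $sum: 1 },
          },
        },
        { $sort: { _id: 1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```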
md
{ "tags": [ "TypeScript", "Atlas", "JavaScript" ], "pageDescription": "Analyse how hashtags on twitter change over time. ", "contentType": "Code Example" }
Trends analyser
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/mongodb-geospatial-queries-csharp
created
# MongoDB Geospatial Queries in C# # MongoDB Geospatial Queries with C# If you've ever glanced at a map to find the closest lunch spots to you, you've most likely used a geospatial query under the hood! Using GeoJSON objects to store geospatial data in MongoDB Atlas, you can create your own geospatial queries for your application. In this tutorial, we'll see how to work with geospatial queries in the MongoDB C# driver. ## Quick Jump * What are Geospatial Queries? * GeoJSON * Prerequisities * Atlas Setup * Download and Import Sample Data * Create 2dsphere Indexes * Creating the Project * Geospatial Query Code Examples * $near * $geoWithin * $geoIntersects * $geoWithin and $center Combined * $geoWithin and $centerSphere Combined * Spherical Geometry Calculations with Radians ## What are Geospatial Queries? Geospatial queries allow you to work with geospatial data. Whether that's on a 2d space (like a flat map) or 3d space (when viewing a spherical representation of the world), geospatial data allows you to find areas and places in reference to a point or set of points. These might sound complicated, but you've probably encountered these use cases in everyday life: searching for points of interest in a new city you're exploring, discovering which coffee shops are closest to you, or finding every bakery within a three-mile radius of your current position (for science!). These kinds of queries can easily be done with special geospatial query operators in MongoDB. And luckily for us, these operators are also implemented in most of MongoDB's drivers, including the C# driver we'll be using in this tutorial. ### GeoJSON One important aspect of working with geospatial data is something called the GeoJSON format. It's an open standard for representing simple geographical features and makes it easier to work with geospatial data. Here's what some of the GeoJSON object types look like: ``` JSON // Point GeoJSON type { "type" : "Point", "coordinates" : -115.20146200000001, 36.114704000000003] } // Polygon GeoJSON type { "type": "Polygon", "coordinates": [ [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ] ] } ``` While MongoDB supports storing your geospatial data as [legacy coordinate pairs, it's preferred to work with the GeoJSON format as it makes complicated queries possible and much simpler. > 💡 Whether working with coordinates in the GeoJSON format or as legacy coordinate pairs, queries require the **longitude** to be passed first, followed by **latitude**. This might seem "backwards" compared to what you may be used to, but be assured that this format actually follows the `(X, Y)` order of math! Keep this in mind as MongoDB geospatial queries will also require coordinates to be passed in `longitude, latitude]` format where applicable. Alright, let's get started with the tutorial! ## Prerequisites * [Visual Studio Community (2019 or higher) * MongoDB C#/.NET Driver (latest preferred, minimum 2.11) * MongoDB Atlas cluster * mongosh ## Atlas Setup To make this tutorial easier to follow along, we'll work with the `restaurants` and `neighborhoods` datasets, both publicly available in our documentation. They are both `JSON` files that contain a sizable amount of New York restaurant and neighborhood data already in GeoJSON format! ### Download Sample Data and Import into Your Atlas Cluster First, download this `restaurants.json` file and this `neighborhoods.json` file. > 💡 These files differ from the the `sample_restaurants` dataset that can be loaded in Atlas! 
While the collection names are the same, the JSON files I'm asking you to download already have data in GeoJSON format, which will be required for this tutorial. Then, follow these instructions to import both datasets into your cluster. > 💡 When you reach Step 5 of importing your data into your cluster (*Run mongoimport*), be sure to keep track of the `database` and `collection` names you pass into the command. We'll need them later! If you want to use the same names as in this tutorial, my database is called `sample-geo` and my collections are called `restaurants` and `neighborhoods` . ### Create 2dsphere Indexes Lastly, to work with geospatial data, a 2dsphere index needs to be created for each collection. You can do this in the MongoDB Atlas portal. First navigate to your cluster and click on "Browse Collections": You'll be brought to your list of collections. Find your restaurant data (if following along, it will be a collection called `restaurants` within the `sample-geo` database). With the collection selected, click on the "Indexes" tab: Click on the "CREATE INDEX" button to open the index creation wizard. In the "Fields" section, you'll specify which *field* to create an index on, as well as what *type* of index. For our tutorial, clear the input, and copy and paste the following: ``` JSON { "location": "2dsphere" } ``` Click "Review". You'll be asked to confirm creating an index on `sample-geo.restaurants` on the field `{ "location": "2dsphere" }` (remember, if you aren't using the same database and collection names, confirm your index is being created on `yourDatabaseName.yourCollectionName`). Click "Confirm." Likewise, find your neighborhood data (`sample-geo.neighborhoods` unless you used different names). Select your `neighborhoods` collection and do the same thing, this time creating this index: ``` JSON { "geometry": "2dsphere" } ``` Almost instantly, the indexes will be created. You'll know the index has been successfully created once you see it listed under the Indexes tab for your selected collection. Now, you're ready to work with your restaurant and neighborhood data! ## Creating the Project To show these samples, we'll be working within the context of a simple console program. We'll implement each geospatial query operator as its own method and log the corresponding MongoDB Query it executes. After creating a new console project, add the MongoDB Driver to your project using the Package Manager or the .NET CLI: *Package Manager* ``` Install-Package MongoDB.Driver ``` *.NET CLI* ``` dotnet add package MongoDB.Driver ``` Next, add the following dependencies to your `Program.cs` file: ``` csharp using MongoDB.Bson; using MongoDB.Bson.IO; using MongoDB.Bson.Serialization; using MongoDB.Driver; using MongoDB.Driver.GeoJsonObjectModel; using System; ``` For all our examples, we'll be using the following `Restaurant` and `Neighborhood` classes as our models: ``` csharp public class Restaurant { public ObjectId Id { get; set; } public GeoJsonPoint Location { get; set; } public string Name { get; set; } } ``` ``` csharp public class Neighborhood { public ObjectId Id { get; set; } public GeoJsonPoint Geometry { get; set; } public string Name { get; set; } } ``` Add both to your application. For simplicity, I've added them as additional classes in my `Program.cs` file. Next, we need to connect to our cluster. Place the following code within the `Main` method of your program: ``` csharp // Be sure to update yourUsername, yourPassword, yourClusterName, and yourProjectId to your own! 
// Similarly, also update "sample-geo", "restaurants", and "neighborhoods" to whatever you've named your database and collections. var client = new MongoClient("mongodb+srv://yourUsername:[email protected]/sample-geo?retryWrites=true&w=majority"); var database = client.GetDatabase("sample-geo"); var restaurantCollection = database.GetCollection("restaurants"); var neighborhoodCollection = database.GetCollection("neighborhoods"); ``` Finally, we'll add a helper method called `Log()` within our `Program` class. This will take the geospatial queries we write in C# and log the corresponding MongoDB Query to the console. This gives us an easy way to copy it and use elsewhere. ``` csharp private static void Log(string exampleName, FilterDefinition filter) { var serializerRegistry = BsonSerializer.SerializerRegistry; var documentSerializer = serializerRegistry.GetSerializer(); var rendered = filter.Render(documentSerializer, serializerRegistry); Console.WriteLine($"{exampleName} example:"); Console.WriteLine(rendered.ToJson(new JsonWriterSettings { Indent = true })); Console.WriteLine(); } ``` We now have our structure in place. Now we can create the geospatial query methods! ## Geospatial Query Code Examples in C# Since MongoDB has dedicated operators for geospatial queries, we can take advantage of the C# driver's filter definition builder to build type-safe queries. Using the filter definition builder also provides both compile-time safety and refactoring support in Visual Studio, making it a great way to work with geospatial queries. ### $near Example in C# The `.Near` filter implements the $near geospatial query operator. Use this when you want to return geospatial objects that are in proximity to a center point, with results sorted from nearest to farthest. In our program, let's create a `NearExample()` method that does that. Let's search for restaurants that are *at most* 10,000 meters away and *at least* 2,000 meters away from a Magnolia Bakery (on Bleecker Street) in New York: ``` cs private static void NearExample(IMongoCollection collection) { // Instantiate builder var builder = Builders.Filter; // Set center point to Magnolia Bakery on Bleecker Street var point = GeoJson.Point(GeoJson.Position(-74.005, 40.7358879)); // Create geospatial query that searches for restaurants at most 10,000 meters away, // and at least 2,000 meters away from Magnolia Bakery (AKA, our center point) var filter = builder.Near(x => x.Location, point, maxDistance: 10000, minDistance: 2000); // Log filter we've built to the console using our helper method Log("$near", filter); } ``` That's it! Whenever we call this method, a `$near` query will be generated that you can copy and paste from the console. Feel free to paste that query into the data explorer in Atlas to see which restaurants match the filter (don't forget to change `"Location"` to a lowercase `"location"` when working in Atlas). In a future post, we'll delve into how to visualize these results on a map! 
For now, you can call this method (and all other following methods) from the `Main` method like so: ```cs static void Main(string] args) { var client = new MongoClient("mongodb+srv://yourUsername:[email protected]/sample-geo?retryWrites=true&w=majority"); var database = client.GetDatabase("sample-geo"); var restaurantCollection = database.GetCollection("restaurants"); var neighborhoodCollection = database.GetCollection("neighborhoods"); NearExample(restaurantCollection); // Add other methods here as you create them } ``` > ⚡ Feel free to modify this code! Change your center point by changing the coordinates or let the method accept variables for the `point`, `maxDistance`, and `minDistance` parameters instead of hard-coding it. In most use cases, `.Near` will do the trick. It measures distances against a flat, 2d plane ([Euclidean plane) that will be accurate for most applications. However, if you need queries to run against spherical, 3d geometry when measuring distances, use the `.NearSphere` filter (which implements the `$nearSphere` operator). It accepts the same parameters as `.Near`, but will calculate distances using spherical geometry. ### $geoWithin Example in C# The `.GeoWithin` filter implements the $geoWithin geospatial query operator. Use this when you want to return geospatial objects that exist entirely within a specified shape, either a GeoJSON `Polygon`, `MultiPolygon`, or shape defined by legacy coordinate pairs. As you'll see in a later example, that shape can be a circle and can be generated using the `$center` operator. To implement this in our program, let's create a `GeoWithinExample()` method that searches for restaurants within an area—specifically, this area: In code, we describe this area as a polygon and work with it as a list of points: ``` cs private static void GeoWithinExample(IMongoCollection collection) { var builder = Builders.Filter; // Build polygon area to search within. // This must always begin and end with the same coordinate // to "close" the polygon and fully surround the area. var coordinates = new GeoJson2DCoordinates] { GeoJson.Position(-74.0011869, 40.752482), GeoJson.Position(-74.007384, 40.743641), GeoJson.Position(-74.001856, 40.725631), GeoJson.Position(-73.978511, 40.726793), GeoJson.Position(-73.974408, 40.755243), GeoJson.Position(-73.981669, 40.766716), GeoJson.Position(-73.998423, 40.763535), GeoJson.Position(-74.0011869, 40.752482), }; var polygon = GeoJson.Polygon(coordinates); // Create geospatial query that searches for restaurants that fully fall within the polygon. var filter = builder.GeoWithin(x => x.Location, polygon); // Log the filter we've built to the console using our helper method. Log("$geoWithin", filter); } ``` ### $geoIntersects Example in C# The `.GeoIntersects` filter implements the [$geoIntersects geospatial query operator. Use this when you want to return geospatial objects that span the same area as a specified object, usually a point. For our program, let's create a `GeoIntersectsExample()` method that checks if a specified point falls within one of the neighborhoods stored in our neighborhoods collection: ``` cs private static void GeoIntersectsExample(IMongoCollection collection) { var builder = Builders.Filter; // Set specified point. For example, the location of a user (with granted permission) var point = GeoJson.Point(GeoJson.Position(-73.996284, 40.720083)); // Create geospatial query that searches for neighborhoods that intersect with specified point. 
// In other words, return results where the intersection of a neighborhood and the specified point is non-empty. var filter = builder.GeoIntersects(x => x.Geometry, point); // Log the filter we've built to the console using our helper method. Log("$geoIntersects", filter); } ``` > 💡 For this method, an overloaded `Log()` method that accepts a `FilterDefinition` of type `Neighborhood` needs to be created. ### Combined $geoWithin and $center Example in C# As we've seen, the `$geoWithin` operator returns geospatial objects that exist entirely within a specified shape. We can set this shape to be a circle using the `$center` operator. Let's create a `GeoWithinCenterExample()` method in our program. This method will search for all restaurants that exist within a circle that we have centered on the Brooklyn Bridge: ``` cs private static void GeoWithinCenterExample(IMongoCollection collection) { var builder = Builders.Filter; // Set center point to Brooklyn Bridge var point = GeoJson.Point(GeoJson.Position(-73.99631, 40.705396)); // Create geospatial query that searches for restaurants that fall within a radius of 20 (units used by the coordinate system) var filter = builder.GeoWithinCenter(x => x.Location, point.Coordinates.X, point.Coordinates.Y, 20); Log("$geoWithin.$center", filter); } ``` ### Combined $geoWithin and $centerSphere Example in C# Another way to query for places is by combining the `$geoWithin` and `$centerSphere` geospatial query operators. This differs from the `$center` operator in a few ways: * `$centerSphere` uses spherical geometry while `$center` uses flat geometry for calculations. * `$centerSphere` works with both GeoJSON objects and legacy coordinate pairs while `$center` *only* works with and returns legacy coordinate pairs. * `$centerSphere` uses radians for distance, which requires additional calculations to produce an accurate query. `$center` uses the units used by the coordinate system and may be less accurate for some queries. We'll get to our example method in a moment, but first, a little context on how to calculate radians for spherical geometry! #### Spherical Geometry Calculations with Radians > 💡 An important thing about working with `$centerSphere` (and any other geospatial operators that use spherical geometry), is that it uses *radians* for distance. This means the distance units used in queries (miles or kilometers) first need to be converted to radians. Using radians properly considers the spherical nature of the object we're measuring (usually Earth) and let's the `$centerSphere` operator calculate distances correctly. Use this handy chart to convert between distances and radians: | Conversion | Description | Example Calculation | | ---------- | ----------- | ------------------- | | *distance (miles) to radians* | Divide the distance by the radius of the sphere (e.g., the Earth) in miles. The equitorial radius of the Earth in miles is approximately `3,963.2`. | Search for objects with a radius of 100 miles: `100 / 3963.2` | | *distance (kilometers) to radians* | Divide the distance by the radius of the sphere (e.g., the Earth) in kilometers. The equitorial radius of the Earth in kilometers is approximately `6,378.1`. | Search for objects with a radius of 100 kilometers: `100 / 6378.1` | | *radians to distance(miles)* | Multiply the radian measure by the radius of the sphere (e.g., the Earth). The equitorial radius of the Earth in miles is approximately `3,963.2`. 
| Find the radian measurement of 50 in miles: `50 * 3963.2` | | *radians to distance(kilometers)* | Multiply the radian measure by the radius of the sphere (e.g., the Earth). The equitorial radius of the Earth in kilometers is approximately `6,378.1`. | Find the radian measurement of 50 in kilometers: `50 * 6378.1` | #### Let's Get Back to the Example! For our program, let's create a `GeoWithinCenterSphereExample()` that searches for all restaurants within a three-mile radius of Apollo Theater in Harlem: ``` cs private static void GeoWithinCenterSphereExample(IMongoCollection collection) { var builder = Builders.Filter; // Set center point to Apollo Theater in Harlem var point = GeoJson.Point(GeoJson.Position(-73.949995, 40.81009)); // Create geospatial query that searches for restaurants that fall within a 3-mile radius of Apollo Theater. // Notice how we pass our 3-mile radius parameter as radians (3 / 3963.2). This ensures accurate calculations with the $centerSphere operator. var filter = builder.GeoWithinCenterSphere(x => x.Location, point.Coordinates.X, point.Coordinates.Y, 3 / 3963.2); // Log the filter we've built to the console using our helper method. Log("$geoWithin.$centerSphere", filter); } ``` ## Next Time on Geospatial Queries in C# As we've seen, working with MongoDB geospatial queries in C# is possible through its support for the geospatial query operators. In another tutorial, we'll take a look at how to visualize our geospatial query results on a map! If you have any questions or get stuck, don't hesitate to post on our MongoDB Community Forums! And if you found this tutorial helpful, don't forget to rate it and leave any feedback. This helps us improve our articles so that they are awesome for everyone!
md
{ "tags": [ "C#" ], "pageDescription": "If you've ever glanced at a map to find the closest lunch spots to you, you've most likely used a geospatial query under the hood! In this tutorial, we'll learn how to store geospatial data in MongoDB Atlas and how to work with geospatial queries in the MongoDB C# driver.", "contentType": "Tutorial" }
MongoDB Geospatial Queries in C#
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/ehealth-example-app
created
# EHRS-Peru ## Creators Jorge Fatama Vera and Katherine Ruiz from Pontificia Universidad Católica del Perú (PUCP) contributed this project. ## About the project This is a theoretical Electronic Health Record system (EHR-S) in Peru, which uses a MongoDB cluster to store clinical information. Note: This is my (Jorge's) dual-thesis project for the degree in Computer Engineering (with the role of Backend Development). The MongoDB + Spring service is hosted in the "ehrs-format" folder of the GitLab repository, in the "develop" branch. ## Inspiration When I started this project, I didn't know about MongoDB. I think in Peru, it's a myth that MongoDB is only used in Data Analytics or Big Data. Few people talk about using MongoDB as their primary database. Most of the time, we use MySQL, SQL Server, or Oracle. In university, we only learn about relational databases. When I looked into my thesis project and other Electronic Health Record systems, I discovered many applications use MongoDB. So I started to investigate more, and I learned that MongoDB has many advantages as my primary database. ## Why MongoDB? We chose MongoDB for its horizontal scaling, powerful query capacity, and document flexibility. We specifically used these features to support various clinical information formats regulated by local legal regulations. When we chose MongoDB as our system's clinical information database, I didn't have much previous experience with it. During system development, I was able to identify the benefits that MongoDB offers. This motivated me to learn more about system development with MongoDB, both in programming forums and MongoDB University courses. Then, I wondered how the technological landscape would benefit from integrating NoSQL databases into information systems with potential for data mining and/or high storage capability. In the medium term, we'll see more systems developed using MongoDB as the primary database in Peruvian universities' projects for information systems, taking advantage of the growing spread of Big Data and Data Analytics in the Latin American region. ## How it works For this project, I'm using both relational and non-relational databases, because I discovered that they are not necessarily separate; they can both be convenient to use. This is a system with a microservice-oriented architecture. There is a summary of each project in the GitLab repository (each folder represents a microservice): * **ehrs-eureka**: Attention Service, which works as a server for the other microservices. * **ehrs-gateway**: Distribution Service, which works as a load balancer, which allows the use of a single port for the requests received by the system. * **ehrs-auth**: Authentication Service, which manages access to the system. * **ehrs-auditoria**: Audit Service, which performs the audit trails of the system. * **ehrs-formatos**: Formats Service, which records clinical information in the database of formats. * **ehrs-fhir** [under maintenance]: FHIR Query Service, which consults the information under the HL7 FHIR standard. ## Challenges and learnings When I presented this idea to my advisor M.Sc. Angel Lena, he didn't know about MongoDB as a support in this area. We had to make a plan to justify the use of MongoDB as the primary database. The challenge, later on, was how we could store all the different formats in one collection. At the moment, we've been working with the free cluster. 
As the program scales and goes into the deployment phase, I will probably need to upgrade my cluster. That will be a challenge for me because the investment can be a problem. Besides that, there are not many other projects built with MongoDB in my university, and it is sometimes difficult for me to get support. To solve this problem, I've been working on increasing my knowledge of MongoDB. I've been taking classes at MongoDB University. I've completed the basics course and the cluster administration course. There are not many certified MongoDB professionals in my country; only two, I believe, and I would like to become the third one. When I started working on my thesis, I didn't imagine that I would have the opportunity to share my project in this way, and I'm very excited that I can. I hope that MongoDB will work on a student ambassador program for universities in the future. Universities still need to learn a lot about MongoDB, and it's exciting that an ambassador program is in the works.
md
{ "tags": [ "JavaScript", "Atlas" ], "pageDescription": " EHRS PUCP, a theoretical national Electronic Health System in Peru", "contentType": "Code Example" }
EHRS-Peru
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/unit-test-atlas-serverless-functions
created
# How to Write Unit Tests for MongoDB Atlas Functions I recently built a web app for my team using Atlas Functions. I wanted to be able to iterate quickly and frequently deploy my changes. To do so, I needed to implement DevOps infrastructure that included a strong foundation of test automation. Unfortunately, I didn't know how to do any of that for apps built using Atlas Functions. In this series, I'll walk you through what I discovered. I'll share how you can build a suite of automated tests and a CI/CD pipeline for web applications that are built on serverless functions. Today, I'll explain how you can write automated unit tests for Atlas Functions. Below is a summary of what we'll cover: - About the Social Stats App - App Architecture - Serverless Architecture and Atlas App Services - Social Stats Architecture - Unit Testing Atlas Functions - Modifying Functions to be Testable - Unit Testing Self-Contained Functions - Unit Testing Functions Using Mocks - Wrapping Up > > >Prefer to learn by video? Many of the concepts I cover in this series >are available in this video. > > ## About the Social Stats App Before I jump into how I tested my app, I want to give you a little background on what the app does and how it's built. My teammates and I needed a way to track our Twitter statistics together. Twitter provides a way for their users to download Twitter statistics. The download is a comma-separated value (CSV) file that contains a row of statistics for each Tweet. If you want to try it out, navigate to and choose to export your data by Tweet. Once my teammates and I downloaded our Tweet statistics, we needed a way to regularly combine our stats without duplicating data from previous CSV files. So I decided to build a web app. The app is really light, and, to be completely honest, really ugly. The app currently consists of two pages. The first page allows anyone on our team to upload their Twitter statistics CSV file. The second page is a dashboard where we can slice and dice our data. Anyone on our team can access the dashboard to pull individual stats or grab combined stats. The dashboard is handy for both my teammates and our management chain. ## App Architecture Let's take a look at how I architected this app, so we can understand how I tested it. ### Serverless Architecture and Atlas The app is built using a serverless architecture. The term "serverless" can be a bit misleading. Serverless doesn't mean the app uses no servers. Serverless means that developers don't have to manage the servers themselves. (That's a major win in my book!) When you use a serverless architecture, you write the code for a function. The cloud provider handles executing the function on its own servers whenever the function needs to be run. Serverless architectures have big advantages over traditional, monolithic applications: - **Focus on what matters.** Developers don't have to worry about servers, containers, or infrastructure. Instead, we get to focus on the application code, which could lead to reduced development time and/or more innovation. - **Pay only for what you use.** In serverless architectures, you typically pay for the compute power you use and the data you're transferring. You don't typically pay for the servers when they are sitting idle. This can result in big cost savings. - **Scale easily.** The cloud provider handles scaling your functions. If your app goes viral, the development and operations teams don't need to stress. 
I've never been a fan of managing infrastructure, so I decided to build the Social Stats app using a serverless architecture. MongoDB Atlas offers several serverless cloud services – including Atlas Data API, Atlas GraphQL API, and Atlas Triggers – that make building serverless apps easy. ### Social Stats Architecture Let's take a look at how the Social Stats app is architected. Below is a flow diagram of how the pieces of the app work together. When a user wants to upload their Twitter statistics CSV file, they navigate to `index.html` in their browser. `index.html` could be hosted anywhere. I chose to host `index.html` using Static Hosting. I like the simplicity of keeping my hosted files and serverless functions in one project that is hosted on one platform. When a user chooses to upload their Twitter statistics CSV file,`index.html` encodes the CSV file and passes it to the `processCSV` Atlas Function. The `processCSV` function decodes the CSV file and passes the results to the `storeCsvInDb` Atlas Function. The `storeCsvInDb` function calls the `removeBreakingCharacters` Atlas Function that removes any emoji or other breaking characters from the data. Then the `storeCsvInDb` function converts the cleaned data to JSON (JavaScript Object Notation) documents and stores those documents in a MongoDB database hosted by Atlas. The results of storing the data in the database are passed up the function chain. The dashboard that displays the charts with the Twitter statistics is hosted by MongoDB Charts. The great thing about this dashboard is that I didn't have to do any programming to create it. I granted Charts access to my database, and then I was able to use the Charts UI to create charts with customizable filters. (Sidenote: Linking to a full Charts dashboard worked fine for my app, but I know that isn't always ideal. Charts also allows you to embed individual charts in your app through an iframe or SDK.) ## Unit Testing Atlas Functions Now that I've explained what I had to test, let's explore how I tested it. Today, we'll talk about the tests that form the base of the testing pyramid:unit tests. Unit tests are designed to test the small units of your application. In this case, the units we want to test are serverless functions. Unit tests should have a clear input and output. They should not test how the units interact with each other. Unit tests are valuable because they: 1. Are typically faster to write than other automated tests. 2. Can be executed quickly and independently as they do not rely on other integrations and systems. 3. Reveal bugs early in the software development lifecycle when they are cheapest to fix. 4. Give developers confidence we aren't introducing regressions as we update and refactor other parts of the code. Many JavaScript testing frameworks exist. I chose to use Jest for building my unit tests as it's a popular choice in the JavaScript community. The examples below use Jest, but you can apply the principles described in the examples below to any testing framework. ### Modifying Atlas Functions to be Testable Every Atlas Function assigns a function to the global variable `exports`. Below is the code for a boilerplate Function that returns `"Hello, world!"` ``` javascript exports = function() { return "Hello, world!"; }; ``` This function format is problematic for unit testing: calling this function from another JavaScript file is impossible. 
To work around this problem, we can add the following three lines to the bottom of Function source files: ``` javascript if (typeof module === 'object') { module.exports = exports; } ``` Let's break down what's happening here. If the type of the module is an `object`, the function is being executed outside of an Atlas environment, so we need to assign our function (stored in `exports`) to `module.exports`. If the type of the module is not an `object`, we can safely assume the function is being executed in an Atlas environment, so we don't need to do anything special. Once we've added these three lines to our serverless functions, we are ready to start writing unit tests. ### Unit Testing Self-Contained Functions Unit testing functions is easiest when the functions are self-contained, meaning that the functions don't call any other functions or utilize any services like a database. So let's start there. Let's begin by testing the `removeBreakingCharacters` function. This function removes emoji and other breaking characters from the Twitter statistics. Below is the source code for the `removeBreakingCharacters` function. ``` javascript exports = function (csvTweets) { csvTweets = csvTweets.replace(/[^a-zA-Z0-9\, "\/\\\n\`~!@#$%^&*()\-_—+=[\]{}|:;\'"<>,.?/']/g, ''); return csvTweets; }; if (typeof module === 'object') { module.exports = exports; } ``` To test this function, I created a new test file named `removeBreakingCharacters.test.js`. I began by importing the `removeBreakingCharacters` function. ``` javascript const removeBreakingCharacters = require('../../../functions/removeBreakingCharacters/source.js'); ``` Next I imported several constants from constants.js. Each constant represents a row of data in a Twitter statistics CSV file. ``` javascript const { header, validTweetCsv, emojiTweetCsv, emojiTweetCsvClean, specialCharactersTweetCsv } = require('../../constants.js'); ``` Then I was ready to begin testing. I began with the simplest case: a single valid Tweet. ``` javascript test('SingleValidTweet', () => { const csv = header + "\n" + validTweetCsv; expect(removeBreakingCharacters(csv)).toBe(csv); }) ``` The `SingleValidTweet` test creates a constant named `csv`. `csv` is a combination of a valid header, a new line character, and a valid Tweet. Since the Tweet is valid, `removeBreakingCharacters` shouldn't remove any characters. The test checks that when `csv` is passed to the `removeBreakingCharacters` function, the function returns a String equal to `csv`. Emojis were a big problem that was breaking my app, so I decided to create a test just for them. ``` javascript test('EmojiTweet', () => { const csvBefore = header + "\n" + emojiTweetCsv; const csvAfter = header + "\n" + emojiTweetCsvClean; expect(removeBreakingCharacters(csvBefore)).toBe(csvAfter); }) ``` The `EmojiTweet` test creates two constants: - `csvBefore` stores a valid header, a new line character, and stats about a Tweet that contains three emoji. - `csvAfter` stores the same valid header, a new line character, and stats about the same Tweet except the three emojis have been removed. The test then checks that when I pass the `csvBefore` constant to the `removeBreakingCharacters` function, the function returns a String equal to `csvAfter`. I created other unit tests for the `removeBreakingCharacters` function. You can find the complete set of unit tests in removeBreakingCharacters.test.js. 
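To give a feel for what one of those additional tests might look like, here is a sketch of a special-characters case written in the same style as `EmojiTweet`. The `specialCharactersTweetCsvClean` constant is hypothetical and would need to be added to the `require` of `constants.js` shown above; the actual tests in the repo may be structured differently.

``` javascript
// Sketch only (not from the original test file). Assumes constants.js also
// exports a hypothetical specialCharactersTweetCsvClean constant: the same
// Tweet row as specialCharactersTweetCsv with the breaking characters removed.
test('SpecialCharactersTweet', () => {
    const csvBefore = header + "\n" + specialCharactersTweetCsv;
    const csvAfter = header + "\n" + specialCharactersTweetCsvClean;
    expect(removeBreakingCharacters(csvBefore)).toBe(csvAfter);
});
```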
### Unit Testing Functions Using Mocks Unfortunately, unit testing most serverless functions will not be as straightforward as the example above. Serverless functions tend to rely on other functions and services. The goal of unit testing is to test individual units—not how the units interact with each other. When a function relies on another function or service, we can simulate the function or service with a mock object. Mock objects allow developers to "mock" what a function or service is doing. The mocks allow us to test individual units. Let's take a look at how I tested the `storeCsvInDb` function. Below is the source code for the function. ``` javascript exports = async function (csvTweets) { const CSV = require("comma-separated-values"); csvTweets = context.functions.execute("removeBreakingCharacters", csvTweets); // Convert the CSV Tweets to JSON Tweets jsonTweets = new CSV(csvTweets, { header: true }).parse(); // Prepare the results object that we will return var results = { newTweets: [], updatedTweets: [], tweetsNotInsertedOrUpdated: [] } // Clean each Tweet and store it in the DB jsonTweets.forEach(async (tweet) => { // The Tweet ID from the CSV is being rounded, so we'll manually pull it out of the Tweet link instead delete tweet["Tweet id"]; // Pull the author and Tweet id out of the Tweet permalink const link = tweet["Tweet permalink"]; const pattern = /https?:\/\/twitter.com\/([^\/]+)\/status\/(.*)/i; const regexResults = pattern.exec(link); tweet.author = regexResults[1]; tweet._id = regexResults[2]; // Generate a date from the time string tweet.date = new Date(tweet.time.substring(0, 10)); // Upsert the Tweet, so we can update stats for existing Tweets const result = await context.services.get("mongodb-atlas").db("TwitterStats").collection("stats").updateOne( { _id: tweet._id }, { $set: tweet }, { upsert: true }); if (result.upsertedId) { results.newTweets.push(tweet._id); } else if (result.modifiedCount > 0) { results.updatedTweets.push(tweet._id); } else { results.tweetsNotInsertedOrUpdated.push(tweet._id); } }); return results; }; if (typeof module === 'object') { module.exports = exports; } ``` At a high level, the `storeCsvInDb` function is doing the following: - Calling the `removeBreakingCharacters` function to remove breaking characters. - Converting the Tweets in the CSV to JSON documents. - Looping through the JSON documents to clean and store each one in the database. - Returning an object that contains a list of Tweets that were inserted, updated, or unable to be inserted or updated. To unit test this function, I created a new file named `storeCsvInDB.test.js`. The top of the file is very similar to the top of `removeBreakingCharacters.test.js`: I imported the function I wanted to test and imported constants. ``` javascript const storeCsvInDb = require('../../../functions/storeCsvInDb/source.js'); const { header, validTweetCsv, validTweetJson, validTweetId, validTweet2Csv, validTweet2Id, validTweet2Json, validTweetKenId, validTweetKenCsv, validTweetKenJson } = require('../../constants.js'); ``` Then I began creating mocks. The function interacts with the database, so I knew I needed to create mocks to support those interactions. The function also calls the `removeBreakingCharacters` function, so I created a mock for that as well. I added the following code to `storeCsvInDB.test.js`. 
``` javascript let updateOne; beforeEach(() => { // Mock functions to support context.services.get().db().collection().updateOne() updateOne = jest.fn(() => { return result = { upsertedId: validTweetId } }); const collection = jest.fn().mockReturnValue({ updateOne }); const db = jest.fn().mockReturnValue({ collection }); const get = jest.fn().mockReturnValue({ db }); collection.updateOne = updateOne; db.collection = collection; get.db = db; // Mock the removeBreakingCharacters function to return whatever is passed to it // Setup global.context.services global.context = { functions: { execute: jest.fn((functionName, csvTweets) => { return csvTweets; }) }, services: { get } } }); ``` Jest runs the `beforeEach` function before each test in the given file. I chose to put the instantiation of the mocks inside of `beforeEach` so that I could add checks for how many times a particular mock is called in a given test case. Putting mocks inside of `beforeEach` can also be handy when we want to change what the mock returns the first time it is called versus the second. Once I had created my mocks, I was ready to begin testing. I created a test for the simplest case: a single tweet. ``` javascript test('Single tweet', async () => { const csvTweets = header + "\n" + validTweetCsv; expect(await storeCsvInDb(csvTweets)).toStrictEqual({ newTweets: [validTweetId], tweetsNotInsertedOrUpdated: [], updatedTweets: [] }); expect(context.functions.execute).toHaveBeenCalledWith("removeBreakingCharacters", csvTweets); expect(context.services.get.db.collection.updateOne).toHaveBeenCalledWith( { _id: validTweetId }, { $set: validTweetJson }, { upsert: true }); }) ``` Let's walk through what this test is doing. Just as we saw in earlier tests in this post, I began by creating a constant to represent the CSV Tweets. `csvTweets` consists of a valid header, a newline character, and a valid Tweet. The test then calls the `storeCsvInDb` function, passing the `csvTweets` constant. The test asserts that the function returns an object that shows that the Tweet we passed was successfully stored in the database. Next, the test checks that the mock of the `removeBreakingCharacters` function was called with our `csvTweets` constant. Finally, the test checks that the database's `updateOne` function was called with the arguments we expect. After I finished this unit test, I wrote an additional test that checks the `storeCsvInDb` function correctly handles multiple Tweets. You can find the complete set of unit tests in storeCsvInDB.test.js. ## Wrapping Up Unit tests can be incredibly valuable. They are one of the best ways to find bugs early in the software development lifecycle. They also lay a strong foundation for CI/CD. Keep in mind the following two tips as you write unit tests for Atlas Functions: - Modify the module exports in the source file of each Function, so you will be able to call the Functions from your test files. - Use mocks to simulate interactions with other functions, databases, and other services. The Social Stats application source code and associated test files are available in a GitHub repo: . The repo's readme has detailed instructions on how to execute the test files. Be on the lookout for the next post in this series where I'll walk you through how to write integration tests for serverless apps. 
## Related Links Check out the following resources for more information: - GitHub Repository: Social Stats - Video: DevOps + Atlas Functions = 😍 - Documentation: MongoDB Atlas App Services - MongoDB Atlas - MongoDB Charts
md
{ "tags": [ "Atlas", "JavaScript", "Serverless" ], "pageDescription": "Learn how to write unit tests for MongoDB Atlas Functions.", "contentType": "Tutorial" }
How to Write Unit Tests for MongoDB Atlas Functions
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/python/python-acid-transactions
created
# Introduction to Multi-Document ACID Transactions in Python ## Introduction Multi-document transactions arrived in MongoDB 4.0 in June 2018. MongoDB has always been transactional around updates to a single document. Now, with multi-document ACID transactions we can wrap a set of database operations inside a start and commit transaction call. This ensures that even with inserts and/or updates happening across multiple collections and/or databases, the external view of the data meets ACID constraints. To demonstrate transactions in the wild we use a trivial example app that emulates a flight booking for an online airline application. In this simplified booking we need to undertake three operations: - Allocate a seat in the `seat_collection` - Pay for the seat in the `payment_collection` - Update the count of allocated seats and sales in the `audit_collection` For this application we will use three separate collections for these documents as detailed above. The code in `transaction_main.py` updates these collections in serial unless the `--usetxns` argument is used. We then wrap the complete set of operations inside an ACID transaction. The code in `transaction_main.py` is built directly using the MongoDB Python driver (PyMongo 3.7.1). The goal of this code is to demonstrate to Python developers just how easy it is to convert existing code to transactions if required or to port older SQL-based systems. ## Setting up your environment The following files can be found in the associated GitHub repo, pymongo-transactions. - `gitignore` : Standard GitHub .gitignore for Python. - `LICENSE` : Apache 2.0 (standard GitHub) license. - `Makefile` : Makefile with targets for default operations. - `transaction_main.py` : Run a set of writes with and without transactions. Run `python transaction_main.py -h` for help. - `transactions_retry.py` : The file containing the transactions retry functions. - `watch_transactions.py` : Use a MongoDB change stream to watch collections as they change when transaction_main.py is running. - `kill_primary.py` : Starts a MongoDB replica set (on port 27100) and kills the primary on a regular basis. This is used to emulate an election happening in the middle of a transaction. - `featurecompatibility.py` : check and/or set feature compatibility for the database (it needs to be set to "4.0" for transactions). You can clone this repo and work alongside us during this blog post (please file any problems on the Issues tab in GitHub). We assume for all that follows that you have Python 3.6 or greater correctly installed and on your path. The Makefile outlines the operations that are required to set up the test environment. All the programs in this example use a port range starting at **27100** to ensure that this example does not clash with an existing MongoDB installation. ## Preparation To set up the environment you can run through the following steps manually. People who have `make` can speed up installation by using the `make install` command. ### Set up a Python virtualenv Check out the doc for virtualenv. ``` bash $ cd pymongo-transactions $ virtualenv -p python3 venv $ source venv/bin/activate ``` ### Install Python MongoDB Driver pymongo Install the latest version of the PyMongo MongoDB Driver (3.7.1 at the time of writing). ``` bash pip install --upgrade pymongo ``` ### Install mtools mtools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). 
mtools also includes `mlaunch`, a utility to quickly set up complex MongoDB test environments on a local machine. For this demo we are only going to use the `mlaunch` program. ``` bash pip install mtools ``` The `mlaunch` program also requires the psutil package. ``` bash pip install psutil ``` The `mlaunch` program gives us a simple command to start a MongoDB replica set as transactions are only supported on a replica set. Start a replica set whose name is **txntest**. See the `make init_server` make target for details: ``` bash mlaunch init --port 27100 --replicaset --name "txntest" ``` ### Using the Makefile for configuration There is a `Makefile` with targets for all these operations. For those of you on platforms without access to Make, it should be easy enough to cut and paste the commands out of the targets and run them on the command line. Running the `Makefile`: ``` bash $ cd pymongo-transactions $ make ``` You will need to have MongoDB 4.0 on your path. There are other convenience targets for starting the demo programs: - `make notxns` : start the transactions client without using transactions. - `make usetxns` : start the transactions client with transactions enabled. - `make watch_seats` : watch the seats collection changing. - `make watch_payments` : watch the payment collection changing. ## Running the transactions example The transactions example consists of two Python programs. - `transaction_main.py` - `watch_transactions.py` ### Running transaction_main.py ``` none $ python transaction_main.py -h usage: transaction_main.py [-h] [--host HOST] [--usetxns] [--delay DELAY] [--iterations ITERATIONS] [--randdelay RANDDELAY RANDDELAY] optional arguments: -h, --help show this help message and exit --host HOST MongoDB URI [default: mongodb://localhost:27100,localhost:27101,localhost:27102/?replicaSet=txntest&retryWrites=true] --usetxns Use transactions [default: False] --delay DELAY Delay between two insertion events [default: 1.0] --iterations ITERATIONS Run N iterations. 0 means run forever --randdelay RANDDELAY RANDDELAY Create a delay set randomly between the two bounds [default: None] ``` You can choose to use `--delay` or `--randdelay`. If you use both, `--delay` takes precedence. The `--randdelay` parameter creates a random delay between a lower and an upper bound that will be added between each insertion event. The `transaction_main.py` program knows to use the **txntest** replica set and the right default port range. To run the program without transactions you can run it with no arguments: ``` none $ python transaction_main.py using collection: SEATSDB.seats using collection: PAYMENTSDB.payments using collection: AUDITDB.audit Using a fixed delay of 1.0 1. Booking seat: '1A' 1. Sleeping: 1.000 1. Paying 330 for seat '1A' 2. Booking seat: '2A' 2. Sleeping: 1.000 2. Paying 450 for seat '2A' 3. Booking seat: '3A' 3. Sleeping: 1.000 3. Paying 490 for seat '3A' 4. Booking seat: '4A' 4. Sleeping: 1.000 ``` The program runs a function called `book_seat()` which books a seat on a plane by adding documents to three collections. First it adds the seat allocation to the `seats_collection`, then it adds a payment to the `payments_collection`, finally it updates an audit count in the `audit_collection`. (This is a much simplified booking process used purely for illustration). The default is to run the program **without** using transactions. To use transactions we have to add the command line flag `--usetxns`. 
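For example, the transaction-enabled run looks like this (the same invocation appears again later in this post):

``` bash
$ python transaction_main.py --usetxns
```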
Run this to test that you are running MongoDB 4.0 and that the correct featureCompatibility is configured (it must be set to 4.0). If you install MongoDB 4.0 over an existing `/data` directory containing 3.6 databases then featureCompatibility will be set to 3.6 by default and transactions will not be available. > > >Note: If you get the following error running `python transaction_main.py --usetxns` that means you are picking up an older version of pymongo (older than 3.7.x) for which there is no multi-document transaction support. > > ``` none Traceback (most recent call last): File "transaction_main.py", line 175, in total_delay = total_delay + run_transaction_with_retry( booking_functor, session) File "/Users/jdrumgoole/GIT/pymongo-transactions/transaction_retry.py", line 52, in run_transaction_with_retry with session.start_transaction(): AttributeError: 'ClientSession' object has no attribute 'start_transaction' ``` ## Watching Transactions To actually see the effect of transactions we need to watch what is happening inside the collections `SEATSDB.seats` and `PAYMENTSDB.payments`. We can do this with `watch_transactions.py`. This script uses MongoDB Change Streams to see what's happening inside a collection in real-time. We need to run two of these in parallel so it's best to line them up side by side. Here is the `watch_transactions.py` program: ``` none $ python watch_transactions.py -h usage: watch_transactions.py [-h] [--host HOST] [--collection COLLECTION] optional arguments: -h, --help show this help message and exit --host HOST mongodb URI for connecting to server [default: mongodb://localhost:27100/?replicaSet=txntest] --collection COLLECTION Watch [default: PYTHON_TXNS_EXAMPLE.seats_collection] ``` We need to watch each collection, so start the watcher in two separate terminal windows. Window 1: ``` none $ python watch_transactions.py --watch seats Watching: seats ... ``` Window 2: ``` none $ python watch_transactions.py --watch payments Watching: payments ... ``` ## What happens when you run without transactions? Let's run the code without transactions first. If you examine the `transaction_main.py` code you will see a function `book_seat`. ``` python def book_seat(seats, payments, audit, seat_no, delay_range, session=None): ''' Run two inserts in sequence.
If session is not None we are in a transaction :param seats: seats collection :param payments: payments collection :param seat_no: the number of the seat to be booked (defaults to row A) :param delay_range: A tuple indicating a random delay between two ranges or a single float fixed delay :param session: Session object required by a MongoDB transaction :return: the delay_period for this transaction ''' price = random.randrange(200, 500, 10) if type(delay_range) == tuple: delay_period = random.uniform(delay_range[0], delay_range[1]) else: delay_period = delay_range # Book Seat seat_str = "{}A".format(seat_no) print(count(seat_no, "Booking seat: '{}'".format(seat_str))) seats.insert_one({"flight_no" : "EI178", "seat" : seat_str, "date" : datetime.datetime.utcnow()}, session=session) print(count( seat_no, "Sleeping: {:02.3f}".format(delay_period))) #pay for seat time.sleep(delay_period) payments.insert_one({"flight_no" : "EI178", "seat" : seat_str, "date" : datetime.datetime.utcnow(), "price" : price}, session=session) audit.update_one({ "audit" : "seats"}, { "$inc" : { "count" : 1}}, upsert=True) print(count(seat_no, "Paying {} for seat '{}'".format(price, seat_str))) return delay_period ``` This program emulates a very simplified airline booking with a seat being allocated and then paid for. These are often separated by a reasonable time frame (e.g. seat allocation vs external credit card validation and anti-fraud check) and we emulate this by inserting a delay. The default is 1 second. Now with the two `watch_transactions.py` scripts running for `seats_collection` and `payments_collection` we can run `transaction_main.py` as follows: ``` bash $ python transaction_main.py ``` The first run is with no transactions enabled. The bottom window shows `transaction_main.py` running. On the top left we are watching the inserts to the seats collection. On the top right we are watching inserts to the payments collection. *Figure: watching the seats and payments collections without transactions.* We can see that the payments window lags the seats window as the watchers only update when the insert is complete. Thus seats sold cannot be easily reconciled with corresponding payments. If after the third seat has been booked we CTRL-C the program we can see that the program exits before writing the payment. This is reflected in the Change Stream for the payments collection which only shows payments for seats 1A and 2A versus seat allocations for 1A, 2A and 3A. If we want payments and seats to be instantly reconcilable and consistent we must execute the inserts inside a transaction. ## What happens when you run with Transactions? Now let's run the same system with `--usetxns` enabled. ``` bash $ python transaction_main.py --usetxns ``` We run with the exact same setup but now set `--usetxns`. Note now how the change streams are interlocked and are updated in parallel. This is because all the updates only become visible when the transaction is committed. Note how we aborted the third transaction by hitting CTRL-C. Now neither the seat nor the payment appear in the change streams unlike the first example where the seat went through. This is where transactions shine in a world where all or nothing is the watchword. We never want to keep seats allocated unless they are paid for. ## What happens during failure? In a MongoDB replica set all writes are directed to the Primary node. If the primary node fails or becomes inaccessible (e.g. due to a network partition) writes in flight may fail. 
In a non-transactional scenario the driver will recover from a single failure and retry the write. In a multi-document transaction we must recover and retry in the event of these kinds of transient failures. This code is encapsulated in `transaction_retry.py`. We both retry the transaction and retry the commit to handle scenarios where the primary fails within the transaction and/or the commit operation. ``` python def commit_with_retry(session): while True: try: # Commit uses write concern set at transaction start. session.commit_transaction() print("Transaction committed.") break except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc: # Can retry commit if exc.has_error_label("UnknownTransactionCommitResult"): print("UnknownTransactionCommitResult, retrying " "commit operation ...") continue else: print("Error during commit ...") raise def run_transaction_with_retry(functor, session): assert (isinstance(functor, Transaction_Functor)) while True: try: with session.start_transaction(): result=functor(session) # performs transaction commit_with_retry(session) break except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc: # If transient error, retry the whole transaction if exc.has_error_label("TransientTransactionError"): print("TransientTransactionError, retrying " "transaction ...") continue else: raise return result ``` In order to observe what happens during elections we can use the script `kill_primary.py`. This script will start a replica-set and continuously kill the primary. ``` none $ make kill_primary . venv/bin/activate && python kill_primary.py no nodes started. Current electionTimeoutMillis: 500 1. (Re)starting replica-set no nodes started. 1. Getting list of mongod processes Process list written to mlaunch.procs 1. Getting replica set status 1. Killing primary node: 31029 1. Sleeping: 1.0 2. (Re)starting replica-set launching: "/usr/local/mongodb/bin/mongod" on port 27101 2. Getting list of mongod processes Process list written to mlaunch.procs 2. Getting replica set status 2. Killing primary node: 31045 2. Sleeping: 1.0 3. (Re)starting replica-set launching: "/usr/local/mongodb/bin/mongod" on port 27102 3. Getting list of mongod processes Process list written to mlaunch.procs 3. Getting replica set status 3. Killing primary node: 31137 3. Sleeping: 1.0 ``` `kill_primary.py` resets electionTimeOutMillis to 500ms from its default of 10000ms (10 seconds). This allows elections to resolve more quickly for the purposes of this test as we are running everything locally. Once `kill_primary.py` is running we can start up `transactions_main.py` again using the `--usetxns` argument. ``` none $ make usetxns . venv/bin/activate && python transaction_main.py --usetxns Forcing collection creation (you can't create collections inside a txn) Collections created using collection: PYTHON_TXNS_EXAMPLE.seats using collection: PYTHON_TXNS_EXAMPLE.payments using collection: PYTHON_TXNS_EXAMPLE.audit Using a fixed delay of 1.0 Using transactions 1. Booking seat: '1A' 1. Sleeping: 1.000 1. Paying 440 for seat '1A' Transaction committed. 2. Booking seat: '2A' 2. Sleeping: 1.000 2. Paying 330 for seat '2A' Transaction committed. 3. Booking seat: '3A' 3. Sleeping: 1.000 TransientTransactionError, retrying transaction ... 3. Booking seat: '3A' 3. Sleeping: 1.000 3. Paying 240 for seat '3A' Transaction committed. 4. Booking seat: '4A' 4. Sleeping: 1.000 4. Paying 410 for seat '4A' Transaction committed. 5. Booking seat: '5A' 5. Sleeping: 1.000 5. 
Paying 260 for seat '5A' Transaction committed. 6. Booking seat: '6A' 6. Sleeping: 1.000 TransientTransactionError, retrying transaction ... 6. Booking seat: '6A' 6. Sleeping: 1.000 6. Paying 380 for seat '6A' Transaction committed. ... ``` As you can see, during elections the transaction will be aborted and must be retried. If you look at the `transaction_retry.py` code you will see how this happens. If a write operation encounters an error it will throw one of the following exceptions: - pymongo.errors.ConnectionFailure - pymongo.errors.OperationFailure Within these exceptions there will be a label called TransientTransactionError. This label can be detected using the `has_error_label(label)` function which is available in pymongo 3.7.x. Transient errors can be recovered from and the retry code in `transactions_retry.py` has code that retries for both writes and commits (see above). ## Conclusion Multi-document transactions are the final piece of the jigsaw for SQL developers who have been shying away from trying MongoDB. ACID transactions make the programmer's job easier and give teams that are migrating from an existing SQL schema a much more consistent and convenient transition path. As most migrations involve a move from highly normalised data structures to more natural and flexible nested JSON documents, one would expect that the number of required multi-document transactions will be lower in a properly constructed MongoDB application. But where multi-document transactions are required, programmers can now include them using very similar syntax to SQL. With ACID transactions in MongoDB 4.0, MongoDB can now be the first choice for an even broader range of application use cases. >If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post. To try it locally, download MongoDB 4.0.
md
{ "tags": [ "Python", "MongoDB" ], "pageDescription": "How to perform multi-document transactions with Python.", "contentType": "Quickstart" }
Introduction to Multi-Document ACID Transactions in Python
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/mongodb-podcast-doug-eck-google-brain
created
# At the Intersection of AI/ML and HCI with Douglas Eck of Google (MongoDB Podcast) Doug Eck is a principal scientist at Google and a research director on the Brain Team. He created the ongoing research project, Magenta, which focuses on the role of machine learning in the process of creating art and music. He is joining Anaiya Raisinghani, Michael Lynn, and Nic Raboy today to discuss all things artificial intelligence, machine learning, and to give us some insight into his role at Google. We are going to be diving head first into HCI (Human Computer Interaction), Google’s new GPT-3 language model, and discussing some of the hard issues with combining databases and deep learning. With all the hype surrounding AI, you may have some questions as to its past and potential future, so stay tuned to hear from one of Google’s best. :youtube[]{vid=Wge-1tcRQco} *Doug Eck* : [00:00:00] Hi everybody. My name is Doug Eck and welcome to the MongoDB podcast. *Michael Lynn* : [00:00:08] Welcome to the show. Today we're talking with Doug Eck. He's a principal scientist at Google and a research director on the Brain Team. He also created and helps lead the Magenta team, an ongoing research project exploring the role of machine learning in the process of creating art and music. Today's episode was produced and the interview was led by Anaiya Raisinghani. She's a summer intern here at MongoDB. She's doing a fantastic job. I hope you enjoy this episode. We've got a couple of guests today and our first guest is a summer intern at MongoDB. *Anaiya Raisinghani* : [00:00:55] Hi everyone. My name is Anaiya Raisinghani and I am the developer advocacy intern here at MongoDB. *Michael Lynn* : [00:01:01] Well, welcome to the show. It's great to have you on the podcast. Before we begin, why don't you tell the folks a little bit about yourself? *Anaiya Raisinghani* : [00:01:08] Yeah, of course. I'm from the Bay Area. I grew up here and I go to school in LA at the University of Southern California. My undergrad degree is in Computational Linguistics, which is half CS, half linguistics. And I want to say my overall interest in artificial intelligence really came from the cool classes I have the unique opportunity to take, like speech recognition, natural language processing, and just being able to use machine learning libraries like TensorFlow in some of my school projects. So I feel very lucky to have had an earlier exposure to AI than most. *Michael Lynn* : [00:01:42] Well, great. And I understand that you brought a guest with you today. Do you want to talk a little bit about who that is and what we're going to discuss today? *Anaiya Raisinghani* : [00:01:48] Yes, definitely. So today we have a very, very special guest Doug Eck, who is a principal scientist at Google, a research director on the Brain Team and the creator of Magenta, so today we're going to be chatting about machine learning, AI, and some other fun topics. Thank you so much, Doug, for being here today. *Doug Eck* : [00:02:07] I'm very happy to be here, Anaiya. *Michael Lynn* : [00:02:08] Well, Doug, it's great to have you on the show. Thanks so much for taking the time to talk with us. And at this point, I kind of want to turn it over to Anaiya. She's got some prepared questions. This is kind of her field of study, and she's got some passion and interest around it. So we're going to get into some really interesting topics in the machine learning space. And Anaiya, I'll turn it over to you. *Anaiya Raisinghani* : [00:02:30] Perfect. Thank you so much, Mike. 
Just to get us started, Doug, could you give us a little background about what you do at Google? *Doug Eck* :[00:02:36] Sure, thanks, Anaiya. Well, right now in my career, I go to a lot of meetings. By that, I mean I'm running a large team of researchers on the Google brain team, and I'm trying to help keep things going. Sometimes it feels like herding cats because we hire very talented and very self motivated researchers who are doing fundamental research in machine learning. Going back a bit, I've been doing something like this, God, it's terrifying to think about, but almost 30 years. In a previous life when I was young, like you Anaiya, I was playing a lot of music, playing guitar. I was an English major as an undergrad, doing a lot of writing and I just kept getting drawn into technology. And once I finished my undergrad, I worked as a database programmer. Well, well, well before MongoDB. And, uh, I did that for a few years and really enjoyed it. And then I decided that my passion was somewhere in the overlap between music and artificial intelligence. And at that point in my life, I'm not sure I could have provided a crisp definition of artificial intelligence, but I knew I wanted to do it. I wanted to see if we can make intelligent computers help us make music. And so I made my way back into grad school. Somehow I tricked a computer science department into letting an English major do a PhD in computer science with a lot of extra math. And, uh, I made my way into an area of AI called machine learning, where our goal is to build computer programs that learn to solve problems, rather than kind of trying to write down the recipe ourselves. And for the last 20 years, I've been active in machine learning as a post-doc doing a post-doctoral fellowship in Switzerland. And then I moved to Canada and became a professor there and worked with some great people at the University of Montreal, just like changing my career every, every few years. So, uh, after seven years there, I switched and came to California and became a research scientist at Google. And I've been very happily working here at Google, uh, ever since for 11 years, I feel really lucky to have had a chance to be part of the growth and the, I guess, Renaissance of neural networks and machine learning across a number of really important disciplines and to have been part of spearheading a bit of interest in AI and creativity. *Anaiya Raisinghani* : [00:04:45] That's great. Thank you so much. So there's currently a lot of hype around just AI in general and machine learning, but for some of our listeners who may not know what it is, how would you describe it in the way that you understand it? *Doug Eck* :[00:04:56] I was afraid you were going to ask that because I said, you know, 30 years ago, I couldn't have given you a crisp definition of AI and I'm not sure I can now without resorting to Wikipedia and cheating, I would define artificial intelligence as the task of building software that behaves intelligently. And traditionally there have been two basic approaches to AI in the past, in the distant past, in the eighties and nineties, we called this neat versus scruffy. Where neat was the idea of writing down sets of rules, writing down a recipe that defined complex behavior like translate a translation maybe, or writing a book, and then having computer programs that can execute those rules. Contrast that with scruffy scruffy, because it's a bit messier. 
Um, instead of thinking we know the rules, instead we build programs that can examine data can look at large data sets. Sometimes datasets that have labels, like this is a picture, this is a picture of an orangutan. This is a picture of a banana, et cetera, and learn the relationship between those labels and that data. And that's a kind of machine learning where our goal is to help the machine learn, to solve a problem, as opposed to building in the answer. And long-term at least the current evidence where we are right now in 2021, is that for many, many hard tasks, probably most of them it's better to teach the machine how to learn rather than to try to provide the solution to the problem. And so that's how I would define a machine learning is writing software that learns to solve problems by processing information like data sets, uh, what might come out of a camera, what might come out of a microphone. And then learn to leverage what it's learned from that data, uh, to solve specific sub problems like translation or, or labeling, or you pick it. There are thousands of possible examples. *Anaiya Raisinghani* : [00:06:51] That's awesome. Thank you so much. So I also wanted to ask, because you said from 30 years ago, you wouldn't have known that definition. What has it been like to see how machine learning has improved over the years? Especially now from an inside perspective at Google. *Doug Eck* :[00:07:07] I think I've consistently underestimated how fast we can move. Perhaps that's human nature. I noticed a statistic that, this isn't about machine learning, but something less than 70 years, 60, 61 years passed between the first flight, the Wright brothers and landing on the moon. And like 60 years, isn't very long. That's pretty shocking how fast we moved. And so I guess it shouldn't be in retrospect, a surprise that we've, we've moved so fast. I did a retrospective where I'm looking at the quality of image generation. I'm sure all of you have seen these hyper-realistic faces that are not really faces, or maybe you've heard some very realistic sounding music, or you've seen a machine learning algorithm able to generate really realistic text, and this was all happening. You know, in the last five years, really, I mean, the work has been there and the ideas have been there and the efforts have been there for at least two decades, but somehow I think the combination of scale, so having very large datasets and also processing power, having large or one large computer or many coupled computers, usually running a GPU is basically, or TPU is what you think of as a video card, giving us the processing power to scale much more information. And, uh, I don't know. It's been really fun. I mean, every year I'm surprised I get up in the morning on Monday morning and I don't dread going to work, which makes me feel extremely lucky. And, uh, I'm really proud of the work that we've done at Google, but I'm really proud of what what's happened in the entire research community. *Michael Lynn* : [00:08:40] So Doug, I want to ask you and you kind of alluded to it, but I'm curious about the advances that we've made. And I realize we are very much standing on the shoulders of giants and the exponential rate at which we increase in the advances. I'm curious from your perspective, whether you think that's software or hardware and maybe what, you know, what's your perspective on both of those avenues that we're advancing in. *Doug Eck* :[00:09:08] I think it's a trade off. It's a very clear trade off. 
When you have slow hardware or not enough hardware, then you need to be much, much more clever with your software. So arguably the, the models, the approaches that we were using in the late 1990s, if you like terminology, if your crowd likes buzzwords support, vector machines, random forests, boosting, these are all especially SVM support vector machines are all relatively complicated. There's a lot of machinery there. And for very small data sets and for limited processing power, they can outperform simpler approaches, a simpler approach, it may not sound simple because it's got a fancy name, a neural network, the underlying mechanism is actually quite simple and it's all about having a very simple rule to update a few numbers. We call them parameters, or maybe we call them weights and neural networks don't work all that well for small datasets and for small neural networks compared to other solutions. So in the 1980s and 1990s, it looked like they weren't really very good. If you scale these up and you run a simple, very simple neural network on with a lot of weights, a lot of parameters that you can adjust, and you have a lot of data allowing the model to have some information, to really grab onto they work astonishingly well, and they seem to keep working better and better as you make the datasets larger and you add more processing power. And that could be because they're simple. There's an argument to be made there that there's something so simple that it scales to different data sets, sizes and different, different processing power. We can talk about calculus, if you want. We can dive into the chain rule. It's only two applications on the chain rule to get to backprop. *Michael Lynn* : [00:10:51] I appreciate your perspective. I do want to ask one more question about, you know, we've all come from this conventional digital, you know, binary based computing background and fascinating things are happening in the quantum space. I'm curious, you know, is there anything happening at Google that you can talk about in that space? *Doug Eck* :[00:11:11] Well, absolutely. We have. So first caveat, I am not an expert in quantum. We have a top tier quantum group down in Santa Barbara and they have made a couple of. It had been making great progress all along a couple of breakthroughs last year, my understanding of the situation that there's a certain class of problems that are extraordinarily difficult to solve with the traditional computer, but which a quantum computer will solve relatively easily. And that in fact, some of these core problems can form the basis for solving a much broader class of problems if you kind of rewrite these other problems as one of these core problems, like factorizing prime numbers, et cetera. And I have to admit, I am just simply not a quantum expert. I'm as fascinated about it as you are, we're invested. I think the big question mark is whether the class of problems that matter to us is big enough to warrant the investment and basically I've underestimated every other technological revolution. Right. You know, like I didn't think we'd get to where we are now. So I guess, you know, my skepticism about quantum is just, this is my personality, but I'm super excited about what it could be. It's also, you know, possible that we'll be in a situation where Quantum yield some breakthroughs that provides us with some challenges, especially with respect to security and cryptography. 
If we find new ways to solve massive problems that lead indirectly for us to be able to crack cryptographic puzzles. But if there's any quantum folks in the audience and you're shrugging your shoulders and be like, this guy doesn't know what he's talking about. This guy admits he doesn't really know what he's talking about. *Michael Lynn* : [00:12:44] I appreciate that. So I kind of derailed the conversation Anaiya, you can pick back up if you like. *Anaiya Raisinghani* : [00:12:51] Perfect. Thank you. Um, I wanted to ask you a little bit about HCI which is human computer interaction and what you do in that space. So a lot of people may not have heard about human computer interaction and the listeners. I can get like a little bit of a background if you guys would like, so it's really just a field that focuses on the design of computer technology and the way that humans and computers interact. And I feel like when people think about artificial intelligence, the first thing that they think about are, you know, robots or big spaces. So I wanted to ask you with what you've been doing at Google. Do you believe that machine learning can really help advance human computer interaction and the way that human beings and machines interact ethically? *Doug Eck* :[00:13:36] Thank you for that. That's an amazingly important question. So first a bit of a preface. I think we've made a fairly serious error in how we talk about AI and machine learning. And specifically I'm really turned off by the personification of AI. Like the AI is going to come and get you, right? Like it's a conscious thing that has volition and wants to help you or hurt you. And this link with AI and robotics, and I'm very skeptical of this sort of techno-utopian folks who believe that we can solve all problems in the world by building a sentient AI. Like there are a lot of real problems in front of us to solve. And I think we can use technology to help help us solve them. But I'm much more interested in solving the problems that are right in front of us, on the planet, rather than thinking about super intelligence or AGI, which is artificial general intelligence, meaning something smarter than us. So what does this mean for HCI human computer interaction? I believe fundamentally. We use technology to help us solve problems. We always have, we have from the very beginning of humanity with things like arrowheads and fire, right. And I fundamentally don't see AI and machine learning as any different. I think what we're trying to do is use technology to solve problems like translation or, you know, maybe automatic identification of objects and images and things like that. Ideally many more interesting problems than that. And one of the big roadblocks comes from taking a basic neural network or some other model trained on some data and actually doing something useful with it. And often it's a vast, vast, vast distance between a model and a lab that can, whatever, take a photograph and identify whether there's an orangutan or a banana in it and build something really useful, like perhaps some sort of medical software that will help you identify skin cancer. Right. And that, that distance ends up being more and more about how to actually make the software work for people deal with the messy real-world constraints that exist in our real, you know, in our actual world. And, you know, this means that like I personally and our team in general, the brain team we've become much more interested in HCI. 
And I wouldn't say, I think the way you worded it was can machine learning help revolutionize HCI or help HCI or help move HCI along. It's the wrong direction we need there like we need HCI's help. So, so we've, we've been humbled, I think by our inability to take like our fancy algorithms and actually have them matter in people's lives. And I think partially it's because we haven't engaged enough in the past decade or so with the HCI community. And, you know, I personally and a number of people on my, in my world are trying really hard to address that. By tackling problems with like joint viewpoints, that viewpoint of like the mathematically driven AI researcher, caring about what the data is. And then the HCI and the user interface folks were saying, wait, what problem are you trying to solve? And how are you going to actually take what this model can do and put it in the hands of users and how are you going to do it in a way that's ethical per your comment Anaiya? And I hope someone grabbed the analogy of going from an image recognition algorithm to identifying skincancers. This has been one topic, for example, this generated a lot of discussion because skin cancers and skin color correlates with race and the ability for these algorithms to work across a spectrum of skin colors may differ, um, and our ability to build trust with doctors so that they want to use the software and patients, they believe they can trust the software. Like these issues are like so, so complicated and it's so important for us to get them right. So you can tell I'm a passionate about this. I guess I should bring this to a close, which is to say I'm a convert. I guess I have the fervor of a convert who didn't think much about HCI, maybe five, six years ago. I just started to see as these models get more and more powerful that the limiting factor is really how we use them and how we deploy them and how we make them work for us human beings. We're the personified ones, not the software, not the AI. *Anaiya Raisinghani* : [00:17:37] That's awesome. Thank you so much for answering my question, that was great. And I appreciate all the points you brought up because I feel like those need to be talked about a lot more, especially in the AI community. I do want to like pivot a little bit and take part of what you said and talk about some of the issues that come with deep learning and AI, and kind of connect them with neural networks and databases, because I would love to hear about some of the things that have come up in the past when deep learning has been tried to be integrated into databases. And I know that there can be a lot of issues with deep learning and tabular databases, but what about document collection based databases? And if the documents are analogous to records or rows in a relational database, do you think that machine learning might work or do you believe that the same issues might come up? *Doug Eck* :[00:18:24] Another great question. So, so first to put this all in content, arguably a machine learning researcher. Who's really writing code day to day, which I did in the past and now I'm doing more management work, but you're, you know, you're writing code day-to-day, you're trying to solve a hard problem. Maybe 70 or 80% of your time is spent dealing with data and how to manage data and how to make sure that you don't have data errors and how to move the data through your system. Probably like in, in other areas of computer science, you know, we tend to call it plumbing. 
You spend a lot of time working on plumbing. And this is a manageable task. When you have a dataset of the sort we might've worked with 15 years ago, 10,000, 28 by 28 pixel images or something like that. I hope I got the pixels, right. Something called eminence, a bunch of written digits. If we start looking at datasets that are all of the web basically represented in some way or another, all of the books in the library of Congress as a, as a hypothetical massive, massive image, data sets, massive video data sets, right? The ability to just kind of fake it. Right, write a little bit of Python code that processes your data and throws it in a flat file of some sort becomes, you know, becomes basically untraceable. And so I think we're at an inflection point right now maybe we were even at that inflection point a year or two ago. Where a lot of machine learning researchers are thinking about scalable ways to handle data. So that's the first thing. The second thing is that we're also specifically with respect to very large neural networks, wanting predictions to be factual. If we have a chat bot that chats with you and that chat bot is driven by a neural network and you ask it, what's the capital of Indiana, my home state. We hope it says Indianapolis every time. Uh, we don't want this to be a roll of the dice. We don't want it to be a probabilistic model that rolls the dice and says Indianapolis, you know, 50 times, but 51 time that 51st time instead says Springfield. So there's this very, very active and rich research area of bridging between databases and neural networks, which are probabilistic and finding ways to land in the database and actually get the right answer. And it's the right answer because we verify that it's the right answer. We have a separate team working with that database and we understand how to relate that to some decision-making algorithm that might ask a question: should I go to Indianapolis? Maybe that's a probabilistic question. Maybe it's role as a dice. Maybe you all don't want to come to Indianapolis. It's up to you, but I'm trying to make the distinction between, between these two kinds of, of decisions. Two kinds of information. One of them is probabilistic. Every sentence is unique. We might describe the same scene with a million different sentences. But we don't want to miss on facts, especially if we want to solve hard problems. And so there's an open challenge. I do not have an answer for it. There are many, many smarter people than me working on ways in which we can bridge the gap between products like MongoDB and machine learning. It doesn't take long to realize there are a lot of people thinking about this. If you do a Google search and you limit to the site, reddit.com and you put them on MongoDB and machine learning, you see a lot of discussion about how can we back machine learning algorithms with, with databases. So, um, it's definitely an open topic. Finally. Third, you mentioned something about rows and columns and the actual structure of a relational database. I think that's also very interesting because algorithms that are sensitive, I say algorithm, I mean a neural network or some other model program designed to solve a problem. You know, those algorithms might actually take advantage of that structure. Not just like cope with it, but actually understand in some ways how, in ways that it's learning how to leverage the structure of the database to make it easier to solve certain problems. 
And then there's evidence outside of, of databases for general machine learning to believe that's possible. So, for example, in work, for example, predicting the structure of proteins and other molecules, we have some what we might call structural prior information we have some idea about the geometry of what molecules should look like. And there are ways to leverage that geometry to kind of limit the space of predictions that the model would make. It's kind of given that structure as, as foundation for, for, for the productions, predictions is making such that it won't likely make predictions that violate that structure. For example, graph neural networks that actually work on a graph. You can write down a database structure as a graph if you'd like, and, and take advantage of that graph for solving hard problems. Sorry, that was, it's like a 10 minute answer. I'll try to make them shorter next time, Anaiya, but that's my answer. *Anaiya Raisinghani* : [00:23:03] Yeah. Cause I, well, I was researching for this and then also when I got the job, a lot of the questions during the interview were, like how you would use machine learning, uh, during my internship and I saw articles like stretching all the way back the early two thousands talking about just how applying, sorry, artificial neural networks and ANN's to large modern databases seems like such a great idea in theory, because you know, like they, they offer potential fault tolerance, they're inherently parallel. Um, and the intersection between them just looks really super attractive. But I found this article about that and like, the date was 2000 and then I looked for other stuff and everything from there was the issues between connecting databases and deep learning. So thank you so much for your answer. I really appreciate that. I feel like, I feel like, especially on this podcast, it was a great, great answer to a hard question. *Doug Eck* :[00:23:57] Can I throw, can I throw one more thing before you move on? There are also some like what I call low hanging fruit. Like a bunch of simpler problems that we can tackle. So one of the big areas of machine learning that I've been working in is, is that of models of, of language of text. Right? And so think of translation, you type in a string in one language, and we translate it to another language or if, and if, if your listeners have paid attention to some, some new um, machine learning models that can, you can chat with them like chatbots, like Google's Lambda or some large language models that can write stories. We're realizing we can use those for data augmentation and, and maybe indirectly for data verification. So we may be able to use neural networks to predict bad data entries. We may be able to, for example, let's say your database is trying to provide a thousand different ways to describe a scene. We may be able to help automate that. And then you'd have a human who's coming in. Like the humans always needs to be there I think to be responsible, you know, saying, okay, here's like, you know, 20 different ways to describe this scene at different levels of complexity, but we use the neural network to help make their work much, much faster. And so if we move beyond trying to solve the entire problem of like, what is a database and how do we generate it, or how do we do upkeep on it? 
Like, that's one thing that's like the holy grail, but we can be thinking about using neural networks in particularly language models to, to like basically super charge human data, data quality people in ways that I think are just gonna go to sweep through the field and help us do a much, much better job of, of that kind of validation. And even I remember from like a long time ago, when I did databases, data validation is a pain, right? Everybody hates bad data. It's garbage in, garbage out. So if we can make cleaner, better data, then we all win. *Anaiya Raisinghani* : [00:25:39] Yeah. And on the subject of language models, I also wanted to talk about the GPT 3 and I saw an article from MIT recently about how they're thinking it can replace Google's page rank. And I would just love to hear your thoughts on what you think might happen in the future and if language models actually could replace indexing. *Doug Eck* :[00:25:58] So to be clear, we will still need to do indexing, right? We still need to index the documents and we have to have some idea of what they mean. Here's the best way to think about it. So we, we talked to IO this year about using some large language models to improve our search in our products. And we've talked about it in other blogs. I don't want to get myself in trouble by poorly stating what has already been stated. I'd refer you there because you know, nobody wants, nobody wants to have to talk to their boss after the podcast comes out and says, why did you say that? You know, but here's the thing. This strikes me. And this is just my opinion. Google's page rank. For those of you who don't know what page rank is, the basic idea is instead of looking at a document and what the document contains. We decide the value of the document by other documents that link into that document and how much we trust the other documents. So if a number of high profile websites link to a document that happens to be about automobiles, we'll trust that that document is about automobiles, right? Um, and so it's, it's a graph problem where we assign trust and propagate it from, from incoming links. Um, thank you, Larry and Sergei. Behind that is this like fundamental mistrust of being able to figure out what's in a document. Right, like the whole idea is to say, we don't really know what's in this document. So we're going to come up with a trick that allows us to value this document based upon what other documents think about it. Right. And one way you could think about this revolution and large language models, um, like GPT-3 which came from open AI and, um, which is based upon some core technology that came from our group called transformer. That's the T in GPT-3 with there's always friendly rivalries that the folks at Open AI are great. And I think our team is great too. We'll kind of ratcheting up who can, who can move faster, um, cheers to Open AI. Now we have some pretty good ways of taking a document full of words. And if you want to think about this abstractly, projecting it into another space of numbers. So maybe for that document, which may have like as many words as you need for the document, let's say it's between 500 and 2,000 words, right. We take a neural network and we run that sequence through the neural network. And we come out with this vector of numbers that vector, that sequence of numbers maybe it's a thousand numbers right, now, thanks to the neural network that thousand numbers actually does a really good job of describing what's in the document. 
We can't read it with our eyes, cause it's just a sequence of numbers. But if we take that vector and compare it to other vectors, what we'll find is similar vectors actually contain documents that contain very similar information and they might be written completely differently. Right. But topically they're similar. And so what we get is the ability to understand massive, massive data sets of text vis-a-vis what it's about, what it means, who it's for. And so we have a much better job of what's in a document now, and we can use that information to augment what we know about how people use documents, how they link to them and how much they trust them. And so that just gives us a better way to surface relevant documents for people. And that's kind of the crux in my mind, or at least in my view of why a large language model might matter for a search company. It helps us understand language and fundamentally most of search is about language. *Anaiya Raisinghani* : [00:29:11] I also wanted to talk to you about, because language is one of the big things with AI, but then now there's been a lot of movement towards art and music. And I know that you're really big into that. So I wanted to ask you about for the listeners, if you could explain a little bit behind Magenta, and then I also wanted to talk to you about Yacht because I heard that they used Magenta for yeah. For their new album. And so like, what are your thoughts on utilizing AI to continue on legacies in art and music and just creation? *Doug Eck* :[00:29:45] Okay, cool. Well, this is a fun question for me. Uh, so first what's Magenta? Magenta is an open source project that I'm very proud to say I created initially about six years ago. And our goal with Magenta is to explore the role of machine learning as a tool in the creative process. If you want to find it, it's at g.co/magenta. We've been out there for a long time. You could also just search for Google Magenta and you'll find us, um, everything we do goes in open source basically provide tools for musicians and artists, mostly musicians based upon the team. We are musicians at heart. That you can use to extend your musical, uh, your musical self. You can generate new melodies, you can change how things sound you can understand more, uh, the technology. You can use us to learn JavaScript or Python, but everything we do is about extending people and their music making. So one of the first things I always say is I think it would be, it's kind of cool that we can generate realistic sounding melodies that, you know, maybe sound like Bach or sound like another composer, but that's just not the point. That's not fun. Like, I think music is about people communicating with people. And so we're really more in the, in the heritage of, you know, Les Paul who invented was one of the inventors of the electric guitar or the cool folks that invented guitar pedals or amplifiers, or pick your favorite technology that we use to make a new kind of music. Our real question is can we like build a new kind of musical instrument or a new kind of music making experience using machine learning. And we've spent a lot of time doing fundamental research in this space, published in conferences and journals of the sort that all computer scientists do. And then we've done a lot of open source work in JavaScript so that you can do stuff really fast in the browser. 
Also plugins for popular software for musicians like Ableton and then sort of core hardcore machine learning in Python, and we've done some experimental work with some artists. So we've tried to understand better on the HCI side, how this all works for real artists. And one of the first groups we worked with is in fact, thank you for asking a group called Yacht. They're phenomenal in my mind, a phenomenal pop band. I think some part LCD sound system. I don't know who else to even add. They're from LA their front person. We don't say front man, because it's Claire is Claire Evans. She's an amazing singer, an utterly astonishing presence on stage. She's also a tech person, a tech writer, and she has a great book out that everybody should read, especially every woman in tech, Anaiya, called BroadBand the story of, um, of women in the internet. I mean, I don't remember if I've got the subtitle, right. So anyway very interesting people and what they did was they came to us and they worked with a bunch of other AI folks, not just Google at all. Like we're one of like five or six collaborators and they just dove in headfirst and they just wrestled with the technology and they tried to do something interesting. And what they did was they took from us, they took a machine learning model. That's able to generate variations on a theme. So, and they use pop music. So, you know, you give it right. And then suddenly the model is generating lots of different variations and they can browse around the space and they can play around and find different things. And so they had this like a slight AI extension of themselves. Right. And what they did was utterly fascinating. I think it's important. Um, they, they first just dove in and technically dealt with the problems we had. Our HCI game was very low then like we're like quite, quite literally first type this pro type this command into, into, into a console. And then it'll generate some midi files and, you know, there are musicians like they're actually quite technically good, but another set of musicians of like what's a command line. Right. You know, like what's terminal. So, you know, you have these people that don't work with our tooling, so we didn't have anything like fancy for them. But then they also set constraints. So, uh, Jona and Rob the other two folks in the band, they came up with kind of a rule book, which I think is really interesting. They said, for example, if we take a melody generated by the Magenta model, we won't edit it ever, ever, ever. Right. We might reject it. Right. We might listen to a bunch of them, but we won't edit it. And so in some sense, they force themselves to like, and I think if they didn't do that, it would just become this mush. Like they, they wouldn't know what the AI had actually done in the end. Right. So they did that and they did the same with another, uh, some other folks, uh, generating lyrics, same idea. They generated lots and lots of lyrics. And then Claire curated them. So curation was important for them. And, uh, this curation process proved to be really valuable for them. I guess I would summarize it as curation, without editing. They also liked the mistakes. They liked when the networks didn't do the right thing. So they liked breakage like this idea that, oh, this didn't do what it was supposed to. I like that. And so this combination of like curiosity work they said it was really hard work. 
Um, and in a sense of kind of building some rules, building a kind of what I would call it, grammar around what they're doing the same way that like filmmakers have a grammar for how you tell a story. They told a really beautiful story, and I don't know. I'm I really love Chain Tripping. That's the album. If you listened to it, every baseline was written by a magenta model. The lyrics were written by, uh, an LSTM network by another group. The cover art is done by this brilliant, uh, artists in Australia, Tom white, you know, it's just a really cool album overall. *Anaiya Raisinghani* : [00:35:09] Yeah, I've listened to it. It's great. I feel like it just alludes to how far technology has come. *Doug Eck* :[00:35:16] I agree. Oh, by the way that the, the drum beats, the drum beats come from the same model. But we didn't actually have a drum model. So they just threw away the notes and kept the durations, you know, and the baselines come from a model that was trained on piano, where the both of, both of both Rob and Jona play bass, but Rob, the guy who usually plays bass in the band is like, it would generate these baselines that are really hard to play. So you have this like, idea of like the AI is like sort of generating stuff that they're just physically not used to playing on stage. And so I love that idea too, that it's like pushing them, even in ways that like onstage they're having to do things slightly differently with their hands than they would have to do. Um, so it's kind of pushes them out. *Michael Lynn* : [00:35:54] So I'm curious about the authoring process with magenta and I mean, maybe even specifically with the way Yacht put this album together, what are the input files? What trains the system. *Doug Eck* :[00:36:07] So in this case, this was great. We gave them the software, they provided their own midi stems from their own work. So, that they really controlled the process. You know, our software has put out and is licensed for, you know, it's an Apache license, but we make no claims on what's being created. They put in their own data, they own it all. And so that actually made the process much more interesting. They weren't like working with some like weird, like classical music, piano dataset, right. They were like working with their own stems from their own, um, their own previous recordings. *Michael Lynn* : [00:36:36] Fantastic. *Anaiya Raisinghani* : [00:36:38] Great. For my last question to kind of round this out, I just wanted to ask, what do you see that's shocking and exciting about the future of machine learning. *Doug Eck* :[00:36:49] I'm so bad at crystal ball. Um, *Michael Lynn* : [00:36:53] I love the question though. *Doug Eck* :[00:36:56] Yeah. So, so here, I think, I think first, we should always be humble about what we've achieved. If you, if you look, you know, humans are really smart, like way smarter than machines. And if you look at the generated materials coming from deep learning, for example, faces, when they first come out, whatever new model first comes out, like, oh my God, I can't tell them from human faces. And then if you play with them for a while, you're like, oh yeah, they're not quite right. They're not quite right. And this has always been true. I remember reading about like when the phonograph first came out and they would, they would demo the phonograph on, on like a stage in a theater. And this is like a, with a wax cylinder, you know? People will leave saying it sounds exactly like an orchestra. I can't tell it apart. Right. 
They're just not used to it. Right. And so like first I think we should be a little bit humble about what we've achieved. I think, especially with like GPT-3, like models, large language models, we've achieved a kind of fluency that we've never achieved before. So the model sounds like it's doing something, but like it's not really going anywhere. Right. And so I think, I think by and large, the real shocking new, new breakthroughs are going to come as we think about how to make these models controllable so can a user really shape the output of one of these models? Can a policymaker add layers to the model that allow it to be safer? Right. So can we really have like use this core neural network as, you know, as a learning device to learn the things that needs to define patterns in data, but to provide users with much, much more control about how, how those patterns are used in a product. And that's where I think we're going to see the real wins, um, an ability to actually harness this, to solve problems in the right way.

*Anaiya Raisinghani* : [00:38:33] Perfect. Doug, thank you so much for coming on today. It was so great to hear from you.

*Doug Eck* :[00:38:39] That was great. Thanks for all the great questions, Anaiya. It was fantastic.

*Michael Lynn* : [00:38:44] I'll reiterate that. Thanks so much, Doug. It's been great chatting with you. Thanks for listening. If you enjoyed this episode, please like and subscribe. Have a question or a suggestion for the show? Visit us in the MongoDB community forums at community.mongodb.com. Thank you so much for taking the time to listen to our episode today. If you would like to learn more about Doug's work at Google, you can find him through his LinkedIn profile or his Google Research profile. If you have any questions or comments about the episode, please feel free to reach out to Anaiya Raisinghani, Michael Lynn, or Nic Raboy. You can also find this, and all episodes of the MongoDB Podcast, on your favorite podcast network.

* Apple Podcasts
* Google Podcasts
* Spotify
md
{ "tags": [ "MongoDB" ], "pageDescription": "Douglas Eck is a Principal Scientist at Google Research and a research director on the Brain Team. His work lies at the intersection of machine learning and human-computer interaction (HCI). Doug created and helps lead Magenta (g.co/magenta), an ongoing research project exploring the role of machine learning in the process of creating art and music. This article is a transcript of the podcast episode where Anaiya Rasinghani leads an interview with Doug to learn more about the intersection between AI, ML, HCI, and Databases.", "contentType": "Podcast" }
At the Intersection of AI/ML and HCI with Douglas Eck of Google (MongoDB Podcast)
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/5-different-ways-deploy-free-database-mongodb-atlas
created
# 5 Different Ways to Deploy a Free Database with MongoDB Atlas You might have already known that MongoDB offers a free tier through M0 clusters on MongoDB Atlas, but did you know that there are numerous ways to deploy depending on your infrastructure needs? To be clear, there's no wrong way to deploy a MongoDB Atlas cluster, but there could be an easier way to fit your operations needs. In this article, we're going to have a quick look at the various ways you can deploy a MongoDB Atlas cluster using tools like Terraform, CloudFormation, CLIs, and simple point and click. ## Using the Atlas Web UI to Deploy a Cluster If you're a fan of point and click deployments like I am, the web UI for MongoDB Atlas will probably fit your needs. Let's take a quick look at how to deploy a new cluster with a database using the UI found within the MongoDB Cloud Dashboard. Within the **Databases** tab for your account, if you don't have any databases or clusters, you'll be presented with the opportunity to build one using the "Build a Database" button. Since we're keeping things free for this article, let's choose the "Shared" option when presented on the next screen. If you think you'll need something else, don't let me stop you! After selecting "Shared" from the options, you'll be able to create a new cluster by first selecting your cloud service provider and region. You can use the defaults, or select a provider or region that you would prefer to use. Your choice has no impact on how you will end up working with your cluster. However, choosing a provider and location that matches your other services could render performance improvements. After selecting the "Create Cluster" button, your cluster will deploy. This could take a few minutes depending on your cluster size. At this point, you can continue exploring Atlas, create a database or two, and be on your way to creating great applications. A good next step after deploying your cluster would be adding entries to your access list. You can learn how to do that here. Let's say you prefer a more CLI-driven approach. ## Using the MongoDB CLI to Deploy a Cluster The MongoDB CLI can be useful if you want to do script-based deployments or if you prefer to do everything from the command line. To install the MongoDB CLI, check out the installation documentation and follow the instructions. You'll also need to have a MongoDB Cloud account created. If this is your first time using the MongoDB CLI, check out the configuration documentation to learn how to add your credentials and other information. For this example, we're going to use the quick start functionality that the CLI offers. From the CLI, execute the following: ```bash mongocli atlas quickstart ``` Using the quick start approach, you'll be presented with a series of questions regarding how you want your Atlas cluster configured. This includes the creation of users, network access rules, and other various pieces of information. To see some of the other options for the CLI, check out the documentation. ## Using the Atlas Admin API to Deploy a Cluster A similar option to using the CLI for creating MongoDB Atlas clusters is to use the Atlas Admin API. One difference here is that you don't need to download or install any particular CLI and you can instead use HTTP requests to get the job done using anything capable of making HTTP requests. 
Take the following HTTP request, for example, one that can still be executed from the command prompt: ``` curl --location --request POST 'https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters?pretty=true' \ --user "{PUBLIC_KEY}:{PRIVATE_KEY}" --digest \ --header 'Content-Type: application/json' \ --data-raw '{ "name": "MyCluster", "providerSettings": { "providerName": "AWS", "instanceSizeName": "M10", "regionName": "US_EAST_1" } }' ``` The above cURL request is a trimmed version, containing just the required parameters, taken from the Atlas Admin API documentation. You can try the above example after switching the `GROUP_ID`, `PUBLIC_KEY`, and `PRIVATE_KEY` placeholders with those found in your Atlas dashboard. The `GROUP_ID` is the project id representing where you'd like to create your cluster. The `PUBLIC_KEY` and `PRIVATE_KEY` are the keys for a particular project with proper permissions for creating clusters. The same cURL components can be executed in a programming language or even a tool like Postman. The Atlas Admin API is not limited to just cURL using a command line. While you can use the Atlas Admin API to create users, apply access rules, and similar, it would take a few different HTTP requests in comparison to what we saw with the CLI because the CLI was designed to make these kinds of interactions a little easier. For information on the other optional fields that can be used in the request, refer to the documentation. ## Using HashiCorp Terraform to Deploy a Cluster There's a chance that your organization is already using an infrastructure-as-code (IaC) solution such as Terraform. The great news is that we have a Terraform provider for MongoDB Atlas that allows you to create a free Atlas database easily. Take the following example Terraform configuration: ``` locals { mongodb_atlas_api_pub_key = "PUBLIC_KEY" mongodb_atlas_api_pri_key = "PRIVATE_KEY" mongodb_atlas_org_id = "ORG_ID" mongodb_atlas_project_id = "PROJECT_ID" } terraform { required_providers { mongodbatlas = { source = "mongodb/mongodbatlas" version = "1.1.1" } } } provider "mongodbatlas" { public_key = local.mongodb_atlas_api_pub_key private_key = local.mongodb_atlas_api_pri_key } resource "mongodbatlas_cluster" "my_cluster" { project_id = local.mongodb_atlas_project_id name = "terraform" provider_name = "TENANT" backing_provider_name = "AWS" provider_region_name = "US_EAST_1" provider_instance_size_name = "M0" } output "connection_strings" { value = mongodbatlas_cluster.my_cluster.connection_strings.0.standard_srv } ``` If you added the above configuration to a **main.tf** file and swapped out the information at the top of the file with your own, you could execute the following commands to deploy a cluster with Terraform: ``` terraform init terraform plan terraform apply ``` The configuration used in this example was taken from the Terraform template accessible within the Visual Studio Code Extension for MongoDB. However, if you'd like to learn more about Terraform with MongoDB, check out the official provider information within the Terraform Registry. ## Using AWS CloudFormation to Deploy a Cluster If your applications are all hosted in AWS, then CloudFormation, another IaC solution, may be one you want to utilize. If you're interested in a script-like configuration for CloudFormation, Cloud Product Manager Jason Mimick wrote a thorough tutorial titled Get Started with MongoDB Atlas and AWS CloudFormation. However, like I mentioned earlier, I'm a fan of a point and click solution. 
A point and click solution can be accomplished with AWS CloudFormation! Navigate to the MongoDB Atlas on AWS page and click "How to Deploy." You'll have a few options, but the simplest option is to launch the Quick Start for deploying without VPC peering. The next steps involve following a four-part configuration and deployment wizard. The first step consists of selecting a configuration template. Unless you know your way around CloudFormation, the defaults should work fine. The second step of the configuration wizard is for defining the configuration information for MongoDB Atlas. This is what was seen in other parts of this article. Replace the fields with your own information, including the public key, private key, and organization id to be used with CloudFormation. Once more, these values can be found and configured within your MongoDB Atlas Dashboard. The final stage of the configuration wizard is for defining permissions. For the sake of this article, everything in the final stage will be left with the default provided information, but feel free to use your own. Once you review the CloudFormation configuration, you can proceed to the deployment, which could take a few minutes. As I mentioned, if you'd prefer not to go through this wizard, you can also explore a more scripted approach using the CloudFormation and AWS CLI. ## Conclusion You just got an introduction to some of the ways that you can deploy MongoDB Atlas clusters. Like I mentioned earlier, there isn't a wrong way, but there could be a better way depending on how you're already managing your infrastructure. If you get stuck with your MongoDB Atlas deployment, navigate to the MongoDB Community Forums for some help!
md
{ "tags": [ "MongoDB" ], "pageDescription": "Learn how to quickly and easily deploy a MongoDB Atlas cluster using a variety of methods such as CloudFormation, Terraform, the CLI, and more.", "contentType": "Quickstart" }
5 Different Ways to Deploy a Free Database with MongoDB Atlas
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-massive-number-collections
created
# Massive Number of Collections In the first post in this MongoDB Schema Design Anti-Patterns series, we discussed how we should avoid massive arrays when designing our schemas. But what about having a massive number of collections? Turns out, they're not great either. In this post, we'll examine why. > > >:youtube]{vid=8CZs-0it9r4 t=719} > >Are you more of a video person? This is for you. > > ## Massive Number of Collections Let's begin by discussing why having a massive number of collections is an anti-pattern. If storage is relatively cheap, who cares how many collections you have? Every collection in MongoDB [automatically has an index on the \_id field. While the size of this index is pretty small for empty or small collections, thousands of empty or unused indexes can begin to drain resources. Collections will typically have a few more indexes to support efficient queries. All of these indexes add up. Additionally, the WiredTiger storage engine (MongoDB's default storage engine) stores a file for each collection and a file for each index. WiredTiger will open all files upon startup, so performance will decrease when an excessive number of collections and indexes exist. In general, we recommend limiting collections to 10,000 per replica set. When users begin exceeding 10,000 collections, they typically see decreases in performance. To avoid this anti-pattern, examine your database and remove unnecessary collections. If you find that you have an increasing number of collections, consider remodeling your data so you have a consistent set of collections. ## Example Let's take an example from the greatest tv show ever created: Parks and Recreation. Leslie is passionate about maintaining the parks she oversees, and, at one point, she takes it upon herself to remove the trash in the Pawnee River. Let's say she wants to keep a minute-by-minute record of the water level and temperature of the Pawnee River, the Eagleton River, and the Wamapoke River, so she can look for trends. She could send her coworker Jerry to put 30 sensors in each river and then begin storing the sensor data in a MongoDB database. One way to store the data would be to create a new collection every day to store sensor data. Each collection would contain documents that store information about one reading for one sensor. ``` javascript // 2020-05-01 collection { "_id": ObjectId("5eac643e64faf3ff31d70d35"), "river": "PawneeRiver", "sensor": 1 "timestamp": "2020-05-01T00:00:00Z", "water-level": 61.56, "water-temperature": 72.1 }, { "_id": ObjectId("5eac643e64faf3ff31d70d36"), "river": "PawneeRiver", "sensor": 2 "timestamp": "2020-05-01T00:00:00Z", "water-level": 61.55, "water-temperature": 72.1 }, ... { "_id": ObjectId("5eac643e64faf3ff31d70dfc"), "river": "WamapokeRiver", "sensor": 90 "timestamp": "2020-05-01T23:59:00Z", "water-level": 72.03, "water-temperature": 64.1 } // 2020-05-02 collection { "_id": ObjectId("5eac644c64faf3ff31d90775"), "river": "PawneeRiver", "sensor": 1 "timestamp": "2020-05-02T00:00:00Z", "water-level": 63.12, "water-temperature": 72.8 }, { "_id": ObjectId("5eac644c64faf3ff31d90776"), "river": "PawneeRiver", "sensor": 2 "timestamp": "2020-05-02T00:00:00Z", "water-level": 63.11, "water-temperature": 72.7 }, ... 
{ "_id": ObjectId("5eac644c64faf3ff31d9079c"), "river": "WamapokeRiver", "sensor": 90 "timestamp": "2020-05-02T23:59:00Z", "water-level": 71.58, "water-temperature": 66.2 } ``` Let's say that Leslie wants to be able to easily query on the `river` and `sensor` fields, so she creates an index on each field. If Leslie were to store hourly data throughout all of 2019 and create two indexes in each collection (in addition to the default index on `_id`), her database would have the following stats: - Database size: 5.2 GB - Index size: 1.07 GB - Total Collections: 365 Each day she creates a new collection and two indexes. As Leslie continues to collect data and her number of collections exceeds 10,000, the performance of her database will decline. Also, when Leslie wants to look for trends across weeks and months, she'll have a difficult time doing so since her data is spread across multiple collections. Let's say Leslie realizes this isn't a great schema, so she decides to restructure her data. This time, she decides to keep all of her data in a single collection. She'll bucket her information, so she stores one hour's worth of information from one sensor in each document. ``` javascript // data collection { "_id": "PawneeRiver-1-2019-05-01T00:00:00.000Z", "river": "PawneeRiver", "sensor": 1, "readings": { "timestamp": "2019-05-01T00:00:00.000+00:00", "water-level": 61.56, "water-temperature": 72.1 }, { "timestamp": "2019-05-01T00:01:00.000+00:00", "water-level": 61.56, "water-temperature": 72.1 }, ... { "timestamp": "2019-05-01T00:59:00.000+00:00", "water-level": 61.55, "water-temperature": 72.0 } ] }, ... { "_id": "PawneeRiver-1-2019-05-02T00:00:00.000Z", "river": "PawneeRiver", "sensor": 1, "readings": [ { "timestamp": "2019-05-02T00:00:00.000+00:00", "water-level": 63.12, "water-temperature": 72.8 }, { "timestamp": "2019-05-02T00:01:00.000+00:00", "water-level": 63.11, "water-temperature": 72.8 }, ... { "timestamp": "2019-05-02T00:59:00.000+00:00", "water-level": 63.10, "water-temperature": 72.7 } ] } ... ``` Leslie wants to query on the `river` and `sensor` fields, so she creates two new indexes for this collection. If Leslie were to store hourly data for all of 2019 using this updated schema, her database would have the following stats: - Database size: 3.07 GB - Index size: 27.45 MB - Total Collections: 1 By restructuring her data, she sees a massive reduction in her index size (1.07 GB initially to 27.45 MB!). She now has a single collection with three indexes. With this new schema, she can more easily look for trends in her data because it's stored in a single collection. Also, she's using the default index on `_id` to her advantage by storing the hour the water level data was gathered in this field. If she wants to query by hour, she already has an index to allow her to efficiently do so. For more information on modeling time-series data in MongoDB, see [Building with Patterns: The Bucket Pattern. ## Removing Unnecessary Collections In the example above, Leslie was able to remove unnecessary collections by changing how she stored her data. Sometimes, you won't immediately know what collections are unnecessary, so you'll have to do some investigating yourself. If you find an empty collection, you can drop it. If you find a collection whose size is made up mostly of indexes, you can probably move that data into another collection and drop the original. You might be able to use $merge to move data from one collection to another. Below are a few ways you can begin your investigation. 
### Using MongoDB Atlas

If your database is hosted in Atlas, navigate to the Atlas Data Explorer. The Data Explorer allows you to browse a list of your databases and collections. Additionally, you can get stats on your database including the database size, index size, and number of collections.

If you are using an M10 cluster or larger on Atlas, you can also use the Real-Time Performance Panel to check if your application is actively using a collection you're considering dropping.

### Using MongoDB Compass

Regardless of where your MongoDB database is hosted, you can use MongoDB Compass, MongoDB's desktop GUI. Similar to the Data Explorer, you can browse your databases and collections so you can check for unused collections. You can also get stats at the database and collection levels.

### Using the Mongo Shell

If you prefer working in a terminal instead of a GUI, connect to your database using the mongo shell.

To see a list of collections, run `db.getCollectionNames()`. Output like the following will be displayed:

``` javascript
[
  "2019-01-01",
  "2019-01-02",
  "2019-01-03",
  "2019-01-04",
  "2019-01-05",
  ...
]
```

To retrieve stats about your database, run `db.stats()`. Output like the following will be displayed:

``` javascript
{
  "db" : "riverstats",
  "collections" : 365,
  "views" : 0,
  "objects" : 47304000,
  "avgObjSize" : 118,
  "dataSize" : 5581872000,
  "storageSize" : 1249677312,
  "numExtents" : 0,
  "indexes" : 1095,
  "indexSize" : 1145790464,
  "scaleFactor" : 1,
  "fsUsedSize" : 5312217088,
  "fsTotalSize" : 10726932480,
  "ok" : 1,
  "$clusterTime" : {
    "clusterTime" : Timestamp(1588795184, 3),
    "signature" : {
      "hash" : BinData(0,"orka3bVeAiwlIGdbVoP+Fj6N01s="),
      "keyId" : NumberLong("6821929184550453250")
    }
  },
  "operationTime" : Timestamp(1588795184, 3)
}
```

You can also run `db.collection.stats()` to see information about a particular collection.

## Summary

Be mindful of creating a massive number of collections as each collection likely has a few indexes associated with it. An excessive number of collections and their associated indexes can drain resources and impact your database's performance. In general, try to limit your replica set to 10,000 collections.

Come back soon for the next post in this anti-patterns series!

> 
> 
> When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB. With a forever-free tier, you're on your way to realizing the full value of MongoDB.
> 
> 

## Related Links

Check out the following resources for more information:

- MongoDB Docs: Reduce Number of Collections
- MongoDB Docs: Data Modeling Introduction
- MongoDB Docs: Use Buckets for Time-Series Data
- MongoDB University M320: Data Modeling
- Blog Series: Building with Patterns
md
{ "tags": [ "MongoDB" ], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Massive Number of Collections", "contentType": "Article" }
Massive Number of Collections
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/cross-cluster-search
created
# Cross Cluster Search Using Atlas Search and Data Federation

The document model is the best way to work with data, and it's a major driver of MongoDB's popularity. The document model also helps MongoDB innovate its own solutions to power the world's most sophisticated data requirements.

Data Federation allows you to form federated database instances that span multiple data sources, like different Atlas clusters, AWS S3 buckets, and other HTTPS sources. Now, one application or service can work with its own cluster, with dedicated resources and its own data-compliance boundaries, while queries can run on a union of the datasets. This is great for analytics, global-view dashboards, and many other use cases in distributed systems.

Atlas Search is an emerging product that allows applications to build relevance-based search, powered by Lucene, directly on their MongoDB collections.

While both products are amazing on their own, they can work together to form a robust, multi-cluster text search that solves challenges that were hard to tackle before.

## Example use case

Plotting attributes on a map based on geo coordinates is a common need for many applications. Merging different search sources into one result set, ranked by relevance or other score factors, within a single request normally requires complex application code. With federated queries running against Atlas Search indexes, this task becomes as easy as firing one query.

In my use case, I have two clusters: cluster-airbnb (Airbnb data) and cluster-whatscooking (restaurant data). For most purposes, the two data sets have nothing in common, so each application keeps its data in its own cluster. However, if I want to plot the locations of restaurants and Airbnbs (and maybe shops, later) around the user, I have to merge the datasets together, with a search index built on top of the merged data.

## With federated queries, everything becomes easier

As mentioned above, the two applications run on two separate Atlas clusters due to their independent microservice nature. They can even be placed on different clouds and regions, like in this picture.

The restaurant data is stored in a collection named "restaurants," following a common model with fields such as grades, menu, and location. The Airbnb application stores its data with a different model, with fields such as bookings, apartment details, and location.

The power of the document model and federated queries is that those data sets can become one if we create a federated database instance and group them under a "virtual collection" called "pointsOfInterest." The data sets can now be queried as if we had a single collection named "pointsOfInterest" unioning the two.

## Let's add Atlas Search to the mix

Since the collections are located on Atlas, we can easily use Atlas Search to index each one individually. Most probably, we have already done so, since the underlying applications need search capabilities for restaurants and Airbnb facilities.

However, if we make sure that the names of the indexes are identical—for example, "default"—and that key fields for specialized search—like geo—are the same (e.g., "location"), we can run federated search queries on "pointsOfInterest." We are able to do that because federated queries are propagated to each individual data source that makes up the virtual collection. Combined with Atlas Search, this is surprisingly powerful: we get results with the search scores correctly merged across all of our data sets.
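To make this more concrete, here is a rough sketch of what such a federated geo query could look like. The index name, the `location` field, and the coordinates are assumptions for illustration, not details taken from the actual applications:

```javascript
// Hypothetical sketch: a geo-based Atlas Search query run against the federated
// "pointsOfInterest" virtual collection. Assumes both source collections have an
// Atlas Search index named "default" with a geo-indexed "location" field.
db.pointsOfInterest.aggregate([
  {
    $search: {
      index: "default",
      near: {
        path: "location",
        origin: { type: "Point", coordinates: [ -73.98, 40.75 ] }, // the user's position (made up)
        pivot: 1000 // distance in meters at which the relevance score is halved
      }
    }
  },
  { $limit: 20 },
  { $project: { name: 1, location: 1, score: { $meta: "searchScore" } } }
])
```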
This means that if points of interest are close to my location, we will get both Airbnb listings and restaurants, correctly ordered by distance. What's even cooler is that Atlas Data Federation intelligently "pushes down" as much of a query as possible, so the search operation is done locally on each cluster and the union is done in the federation layer, making this operation as efficient as possible.

## Finally, let's chart it up

We can take the query we just ran in Compass and export it to MongoDB Charts, our native charting offering that can directly connect to a federated database instance, plotting the data on a map:

:charts[]{url="https://charts.mongodb.com/charts-search-demos-rtbgg" id="62cea0c6-2fb0-4a7e-893f-f0e9a2d1ef39"}

## Wrap-up

With new products come new power and possibilities. Joining the forces of Data Federation and Atlas Search allows creators to easily form applications like never before. Start innovating today with MongoDB Atlas.
md
{ "tags": [ "Atlas" ], "pageDescription": "Atlas Data Federation opens a new world of data opportunities. Cross cluster search is available on MongoDB Atlas by combining the power of data federation on different Atlas Search indexes scattered cross different clusters, regions or even cloud providers.", "contentType": "Article" }
Cross Cluster Search Using Atlas Search and Data Federation
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/swift/build-host-docc-documentation-using-github-actions-netlify
created
# Continuously Building and Hosting our Swift DocC Documentation using Github Actions and Netlify In a past post of this series, we showed how easy it was to generate documentation for our frameworks and libraries using DocC and the benefits of doing it. We also saw the different content we can add, like articles, how-tos, and references for our functions, classes, and structs. But once generated, you end up with an archived DocC folder that is not _that_ easy to share. You can compress it, email it, put it somewhere in the cloud so it can be downloaded, but this is not what we want. We want: * Automatic (and continuous) generation of our DocC documentation bundle. * Automatic (and continuous) posting of that documentation to the web, so it can be read online. ## What’s in a DocC bundle? A `.doccarchive` archive is, like many other things in macOS, a folder. Clone the repository we created with our documentation and look inside `BinaryTree.doccarchive` from a terminal. ```bash git clone https://github.com/mongodb-developer/realm-binary-tree-docc cd BinaryTree.doccarchive ``` You’ll see: ``` .. ├── css │ ├── … │ └── tutorials-overview.7d1da3df.css ├── data │ ├── documentation │ │ ├── binarytree │ │ │ ├── … │ │ │ └── treetraversable-implementations.json │ └── tutorials │ ├── binarytree │ │ ├── … │ │ └── traversingtrees.json │ └── toc.json ├── downloads ├── favicon.ico ├── favicon.svg ├── images │ ├── … │ └── tree.png ├── img │ ├── … │ └── modified-icon.5d49bcfe.svg ├── index │ ├── availability.index │ ├── data.mdb │ ├── lock.mdb │ └── navigator.index ├── index.html ├── js │ ├── chunk-2d0d3105.459bf725.js │ ├ … │ └── tutorials-overview.db178ab9.js ├── metadata.json ├── theme-settings.json └── videos ``` This is a single-page web application. Sadly, we can’t just open `index.html` and expect it to render correctly. As Apple explains in the documentation, for this to work, it has to be served from a proper web server, with a few rewrite rules added: > To host a documentation archive on your website, do the following: > > 1. Copy the documentation archive to the directory that your web server uses to serve files. In this example, the documentation archive is SlothCreator.doccarchive. > 1. Add a rule on the server to rewrite incoming URLs that begin with /documentation or /tutorial to SlothCreator.doccarchive/index.html. > 1. Add another rule for incoming requests to support bundled resources in the documentation archive, such as CSS files and image assets. They even add a sample configuration to use with the Apache `httpd` server. So, to recap: * We can manually generate our documentation and upload it to a web server. * We need to add the rewrite rules described in Apple’s documentation for the DocC bundle to work properly. Each time we update our documentation, we need to generate it and upload it. Let’s generate our docs automatically. ## Automating generation of our DocC archive using GitHub Actions We’ll continue using our Binary Tree Package as an example to generate the documentation. We’ll add a GitHub Action to generate docs on each new push to main. This way, we can automatically refresh our documentation with the latest changes introduced in our library. To add the action, we’ll click on the `Actions` button in our repo. In this case, a Swift Action is offered as a template to start. We’ll choose that one: After clicking on `Configure`, we can start tweaking our action. A GitHub action is just a set of steps that GitHub runs in a container for us. 
There are predefined steps, or we can just write commands that will work in our local terminal. What we need to do is:

* Get the latest version of our code.
* Build out our documentation archive.
* Find where the `doccarchive` has been generated.
* Copy that archive to a place where it can be served online.

We'll call our action `docc.yml`. GitHub actions are YAML files, as the documentation tells us. After adding them to our repository, they will be stored in `.github/workflows/`. So, they're just text files we can edit locally and push to our repo.

### Getting the latest version of our code

This is the easy part. Every time a Github action starts, it creates a new, empty container and clones our repo. So, our code is there, ready to be compiled, pass all tests, and do everything we need to do with it. Our action starts with:

```yaml
name: Generate DocC
on:
  push:
    branches: [ main ]

jobs:
  Build-Github-Actions:
    runs-on: macos-latest
    steps:
      - name: Git Checkout
        uses: actions/checkout@v2
```

So, here:

* We gave the action the name "Generate DocC".
* Then we select when it'll run, i.e., on any pushes to `main`.
* We run this on a macOS container, as we need Xcode.
* The first step is to clone our repo. We use a predefined action, `checkout`, that GitHub provides us with.

### Building out our documentation archive

Now that our code is in place, we can use `xcodebuild` to build the DocC archive. We can build our projects from the command line, run our tests, or, in this case, build the documentation.

```bash
xcodebuild docbuild -scheme BinaryTree -derivedDataPath ./docbuild -destination 'platform=iOS Simulator,OS=latest,name=iPhone 13 mini'
```

Here we're building to generate DocC (`docbuild` parameter), choosing the `BinaryTree` scheme in our project, putting all generated binaries in a folder at hand (`docbuild`), and using an iPhone 13 mini as Simulator. When we build our documentation, we need to compile our library too. That's why we need to choose the Simulator (or device) used for building.

### Find where the `doccarchive` has been generated

If everything goes well, we'll have our documentation built inside `docbuild`. We'll search for it, as each build will generate a different hash to store the results of our build. And this is, on each run, a clean machine. To find the archive, we use:

```bash
find ./docbuild -type d -iname "BinaryTree.doccarchive"
```

### Copy our documentation to a place where it can be served online

Now that we know where our DocC archive is, it's time to put it in a different repository. The idea is we'll have one repository for our code and one for our generated DocC bundle. Netlify will read from this second repository and host it online.

So, we clone the repository that will hold our documentation with:

```bash
git clone https://github.com/mongodb-developer/realm-binary-tree-docc
```

So, yes, now we have two repositories, one cloned at the start of the action and now this one that holds only the documentation.

We copy over the newly generated DocC archive:

```bash
cp -R "$DOCC_DIR" realm-binary-tree-docc
```

And we commit all changes:

```bash
cd realm-binary-tree-docc
git add .
git commit -m "$DOC_COMMIT_MESSAGE"
git status
```

Here, `$DOC_COMMIT_MESSAGE` is just a variable we populate with the last commit message from our repo and the current date. But it can be any message.

After this, we need to push the changes to the documentation repository.
```bash
git config --get remote.origin.url
git remote set-url origin https://${{ secrets.API_TOKEN_GITHUB }}@github.com/mongodb-developer/realm-binary-tree-docc
git push origin
```

Here we first print our `origin` (the repo we’ll be pushing our changes to) with:

```bash
git config --get remote.origin.url
```

This command shows the origin of a git repository. It will print the URL of our code repository, but that is not where we want to push. We want to push to the _documentation_ repository. So, we set the origin to point to https://github.com/mongodb-developer/realm-binary-tree-docc. As we need permission to push changes, we authenticate using a Personal Access Token. From GitHub's documentation on Personal Access Tokens:

> You should create a personal access token to use in place of a password with the command line or with the API.

Luckily, GitHub Actions has a way to store these secrets so they’re not publicly accessible. Just go to your repository’s Settings and expand Secrets. You’ll see an “Actions” option. There you can give your secret a name to be used later in your actions.

For reference, this is the complete action I’ve used.

## Hosting our DocC archives in Netlify

As shown in this excellent post by Joseph Duffy, we'll be hosting our documentation in Netlify. Creating a free account is super easy. In this case, I advise you to use your GitHub credentials to log in to Netlify. This way, adding a new site that reads from a GitHub repo will be super easy.

Just add a new site and select Import an existing project. You can then choose GitHub, and once authorized, you’ll be able to select one of your repositories. Now I set it to deploy with “Any pull request against your production branch / branch deploy branches.” So, every time your repo changes, Netlify will pick up the change and host it online (if it’s a web app, that is).

But we’re missing just one detail. Remember I mentioned before that we need to add some rewrite rules to our hosted documentation? We’ll add those in a file called `netlify.toml`. This file looks like:

```toml
[build]
publish = "BinaryTree.doccarchive/"

[[redirects]]
  from = "/documentation/*"
  status = 200
  to = "/index.html"

[[redirects]]
  from = "/tutorials/*"
  status = 200
  to = "/index.html"

[[redirects]]
  from = "/data/documentation.json"
  status = 200
  to = "/data/documentation/binarytree.json"

[[redirects]]
  force = true
  from = "/"
  status = 302
  to = "/documentation/"

[[redirects]]
  force = true
  from = "/documentation"
  status = 302
  to = "/documentation/"

[[redirects]]
  force = true
  from = "/tutorials"
  status = 302
  to = "/tutorials/"
```

To use it in your own project, just review these lines:

```toml
publish = "BinaryTree.doccarchive/"
…
to = "/data/documentation/binarytree.json"
```

And change them accordingly.

## Recap

In this post, we’ve seen how to:

* Add a GitHub Action to a code repository that continuously builds a DocC documentation bundle every time we push a change to the code.
* Have that action in turn push the newly built documentation to a documentation repository for our library.
* Set up that documentation repository in Netlify, adding some rewrite rules so the documentation can be hosted online.

Don’t wait and add continuous generation of your library’s documentation to your CI pipeline!
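For convenience, here is a rough sketch of how the steps described above could be assembled into a single `docc.yml`. The way `DOCC_DIR` and `DOC_COMMIT_MESSAGE` are populated, the `head -n 1`, and the bot identity used for the commit are illustrative choices, not necessarily what the complete action referenced above does:

```yaml
name: Generate DocC
on:
  push:
    branches: [ main ]

jobs:
  Build-Github-Actions:
    runs-on: macos-latest
    steps:
      - name: Git Checkout
        uses: actions/checkout@v2

      - name: Build DocC archive
        run: |
          xcodebuild docbuild -scheme BinaryTree \
            -derivedDataPath ./docbuild \
            -destination 'platform=iOS Simulator,OS=latest,name=iPhone 13 mini'

      - name: Publish to documentation repository
        run: |
          # Locate the generated archive (the build output lives under a hashed path)
          DOCC_DIR=$(find ./docbuild -type d -iname "BinaryTree.doccarchive" | head -n 1)
          # One possible way to build the commit message: last commit subject + date
          DOC_COMMIT_MESSAGE="$(git log -1 --pretty=%s) - $(date '+%Y-%m-%d')"
          git clone https://github.com/mongodb-developer/realm-binary-tree-docc
          cp -R "$DOCC_DIR" realm-binary-tree-docc
          cd realm-binary-tree-docc
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add .
          git commit -m "$DOC_COMMIT_MESSAGE"
          git remote set-url origin https://${{ secrets.API_TOKEN_GITHUB }}@github.com/mongodb-developer/realm-binary-tree-docc
          git push origin
```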
md
{ "tags": [ "Swift", "Realm", "GitHub Actions" ], "pageDescription": "In this post we'll see how to use Github Actions to continuously generate the DocC documentation for our Swift libraries and how to publish this documentation so that can be accessed online, using Netlify.", "contentType": "Tutorial" }
Continuously Building and Hosting our Swift DocC Documentation using Github Actions and Netlify
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/swift-ui-meetup
created
# SwiftUI Best Practices with Realm Didn't get a chance to attend the SwiftUI Best Practices with Realm Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up. >SwiftUI Best Practices with Realm > >:youtube]{vid=mTv96vqTDhc} In this event, Jason Flax, the engineering lead for the Realm iOS team, explains what SwiftUI is, why it's important, how it will change mobile app development, and demonstrates how Realm's integration with SwiftUI makes it easy for iOS developers to leverage this framework. In this 50-minute recording, Jason covers: - SwiftUI Overview and Benefits - SwiftUI Key Concepts and Architecture - Realm Integration with SwiftUI - Realm Best Practices with SwiftUI > **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. [Get started now by build: Deploy Sample for Free! ## Transcript **Ian Ward**: All right, I think we are just about ready to kick it off. So to kick it off here, this is a new year and this is our kind of first inaugural meeting for the user group for the Realm user group. What we're going to do over the course of this next year is have and schedule these at least once a month if not multiple times a month. Where we kind of give a talk about a particular topic and then we can have a Q&A. And this is kind of just about exchanging new and interesting ideas. Some of them will be about iOS, obviously we have other SDKs. It could be about Android. Some of them will come from the Realm team. Others could come from the community. So if you have a particular talk or something you want to talk about, please reach out to us. We would love to hear from you. And potentially, you could do a talk here as well. **Ian Ward**: The format kind of today is we're going to hear from Jason Flax. And actually, I should introduce myself. I'm Ian Ward. I do product at MongoDB but I focus on the Realm SDK, so all of our free and opensource community SDKs as well as the synchronization product. I came over the with the Realm acquisition approximately almost two years ago now. And so I've been working on mobile application architecture for the last several years. Today, we're going to hear from Jason Flax who is our lead iOS engineer. The iOS team has been doing a ton of work on making our SwiftUI integration really great with Realm. So he's going to talk about. He's also going to kind of give a quick rundown of SwiftUI. I'll let him get into that. **Ian Ward**: But if you could please, just in terms of logistics, mute yourself during the presentation, he has a lot to go through. This will be recorded so we'll share this out later. And then at the end, we'll save some time for Q&A, you can ask questions. We can un-mute, we can answer any questions you might have. If you have questions during the presentation, just put them in the chat and we'll make sure to get to them at the end. So without further adieu, Jason, if you want to kick it off here and maybe introduce yourself. **Jason Flax**: Sure thing. Yeah, thanks for the intro there. So I'm Jason Flax. I am the lead of the Cocoa team or the iOS team. I did not come along with the Realm acquisition, I was at Mongo previously. 
But the product that I was working on Stitch largely overlapped with the work of Realm and it was a natural move for me to move over to the Cocoa team. **Jason Flax**: It's been a great, has it been two years? I actually don't even know. Time doesn't mean much right now. But yeah, it's been a lot of fun. We've been working really hard to try to get Realm compatible with SwiftUI and just working really well. And I think we've got things in a really nice place now and I'm pretty excited to show everyone in the presentation. I'll try to move through it quickly though because I do want to get time for questions and just kind of the mingling that you normally have at an actual user group. Cool. **Ian Ward**: Perfect. Thanks Jason. Yeah, normally we would have refreshments and pie at the end. But we'll have to settle for some swag. So send out a link in the chat if you want to fill it out to get some swag, it'd be great. Thank you for attending and thank you Jason. **Jason Flax**: Cool. I'll start sharing this. Let's see where it's at. Can people see the presentation? **Ian Ward**: I can. You're good. **Jason Flax**: Cool. All right, well, this is SwiftUI and Realm best practices. I am Jason Flax lead engineer for the Realm Cocoa team. All very self explanatory. Excited to be here, as I said. Let's get started. Next. Cool, so the agenda for today, why SwitftUI, SwiftUI basics which I'll try to move through quickly. I know you all are very excited to hear me talk about what VStack is. Realm models, how they actually function, what they do, how they function with SwiftUI and how live objects function with SwiftUI since it's a very state based system. SwiftUI architecture which expect that to be mildly controversial. Things have changed a lot since the old MVC days. A lot of the old architectures, the three letter, five letter acronyms, they don't really make as much sense anymore so I'm here to talk about those. And then the Q&A. **Jason Flax**: Why SwiftUI? I'm sure you all are familiar with this. That on the right there would actually be a fairly normal storyboard, as sad as that is. And below would be a bit of a merge conflict there between the underlying nib code which for anybody that's been doing iOS development for a while, it's a bit of a nightmare. I don't want to slag UI kit too hard, it actually is really cool. I think one of my favorite things about UI kit was being able to just drag and drop elements into code as IBOutlets or so on. **Jason Flax**: I've actually used that as means to teach people programming before because it's like, oh right, I have this thing on this view, I drag and drop it into the code. That code is the thing. That code is the thing. That's a really powerful learning tool and I think Apple did a great job there. But it's kind of old. It didn't necessarily scale well for larger teams. If you had a team of four or five people working on an app and you had all these merge conflicts, you'd spend a full day on them. Not to mention the relationships between views being so ridiculously complex. It's the stuff of nightmares but it's come a long way now. SwiftUI, it seems like something they've been building towards for a really long time. **Jason Flax**: Architectures, this is what I was just talking about. UI kit, though it was great, introduced a lot of complex problems that people needed to solve, right? 
You'd end up with all of this spaghetti code trying to update your views and separate the view object from the business logic which is something you totally need to do. But you ended up having to figure out clever ways to break these pieces up and connect all the wires. The community figured out various ways. Some working better for others. Some of them were use-case based. The problem was if you actually adhered to any of them, with the exception of maybe MVC and MVI which we'll talk about later, you'd end up with all of these neat little pieces, like Legos even but there's a lot of boilerplate. And I'd say you'd kind of have gone from spaghetti code to ravioli code which is a whole different set of problems. I'll talk later on about what the new better architectures might be for SwiftUI since you don't need a lot of these things anymore. **Jason Flax**: Let's go over the basics. This is an app. SwiftUI.app there is a class that is basically replacing app delegate, not a class, protocol, sorry. It's basically replacing the old app delegate. This isn't a massive improvement. It's removing, I don't know, 10 lines of code. It's a nice small thing, it's a visual adjustment. It sets the tone I guess of for the rest of SwiftUI. Here we are, @main struct app, this is my app, this is my scene, this is my view. There you go, that's it. If you still need some of the old functionality of SwiftUI, or sorry not SwiftUI, app delegate, there is a simple property wrapper that you just add on the view, it's called UI application development adaptor. That'll give a lot of the old features that you probably still will need in most iOS apps. **Jason Flax**: Moving on, content view. This is where the meat of the view code is. This is our first view. Content view as itself is not a concept. It happens to be when I'm in my view, it is very descriptive of what this is. It is the content. Basically the rest of the SwiftUI presentation is going to be me stepping through each of the individual views, explaining them, explaining what I'm doing and how they all connect together. On the right, what we're building here is a reminders' app. Anybody here that has an iPhone has a reminders' app. This is obviously, doesn't have all the bells and whistles that that does but it's a good way to show what SwiftUI can do. **Jason Flax**: The navigation view is going to be many people's top level view. It enables a whole bunch of functionality in SwiftUI, like edit buttons, like navigation links, like the ability to have main and detailed views and detailed views of detailed views. All the titles are in alignment. As you can see, that edit button, which I am highlighting with my mouse, that's actually built in. That is the edit button here. That will just sort of automagically enable a bunch of things if you have it in your view hierarchy. This is both a really cool thing and somewhat problematic with SwiftUI. There is a load of implicit behavior that you kind of just need to learn. That said, once you do learn it, it does mean a lot less code. And I believe the line goes, the best line of code is the one that's never been written. So less code the better as far as I'm concerned so let's dig in. **Ian Ward**: That's right. **Jason Flax**: It's one of my favorites. Cool, so let's talk about the VStack. Really straightforward, not gong to actually harp on this too long. It's a vertical stack views, it's exactly what it sounds like. I suppose the nice thing here is that this one actually is intuitive. 
You put a bunch of views together, those views will be stacked. So what have here, you have the search bar, the search view, the reminder list results view. Each one of these is a reminder list and they hook into the Realm results struct, which I'll dig into later. A spacer, which I'll also dig into a bit, and the footer which is just this add list button and it stacks them vertically. **Jason Flax**: The spacers a weird thing that has been added which, again, it's one of these non-intuitive things that once you learn it, it's incredibly useful. Anybody familiar with auto layout and view constraints will immediately sort of latch onto this because it is nice when you get it right. The tricky part is always getting it right. But it's pretty straightforward. It creates space between views. In this case, what it's literally doing is pushing back the reminder list results view in the footer, giving them the space needed so that if this... it knocks the footer to the bottom of the view which is exactly what we want. And if the list continues to grow, this inner, say, view, will be scrollable while the footer stays at the bottom. **Jason Flax**: Right, @State and the $ operator. This is brand new. It is all tied to property wrappers. It was introduced alongside SwiftUI though they are technically separate features. @STate is really cool. Under the hood what's happening is search filter isn't actually baked into the class that way. When this code compiles, there actually will be an underscore search filter on the view that is the string. The search filter you see here, the one that I'm referencing several times in the code, that is actually the property or the @State. And @State is a struct that contains reference based storage to this sting that I can modify it between views. And when the state property is updated, it will actually automatically update the view because state inherits from a thing call dynamic property which allows you to notify the view, right, this thing has changed, please update my view. **Jason Flax**: And as you can see on the right side here, when I type into the search bar, I type ORK, O-R-K, which slowly narrows down the reminder list that we have available. It does that all kind of magically. Basically, in the search view I type in, it passes that information back to state which isn't technically accurate but I'll explain more later. And then will automatically update the results view with that information. **Jason Flax**: The $ operator is something interesting out of property wrappers as well. All property wrappers have the ability to project a value. It's a variable on the property wrapper called projected value. That can be any type you want. In the case of @State, it is a binding type. Binding is going to encapsulate the property. It's going to have a getter and setter and it' going to do some magic under the hood that when I pass this down to the views, it's going to get stored in the view hierarchy. And when I modify it, because it holds the same let's say reference and memory as @State, @State is going to know, okay, I need to update the view now. We're going to see the $ operator a bunch here. I'll bring it up several times because Realm is now also taking advantage of this feature. **Jason Flax**: Let's dig into custom views. SwiftUI has a lot of really cool baked in functionality. And I know they're doing another release in June and there's a whole bunch of stuff coming down the pipeline for it. Not every view you'd think would exist exists. 
I really wanted a simple clean Appley looking search view in this app so I had to make my own custom class for it. You'll also notice that I pass in the search filter which is stored as a state variable on the content view. We'll get to that in a moment. This is the view. Search view, there's a search filter on top. This is the @binding I was talking about. We have a verticle stack of views here. Let's dig in. **Jason Flax**: The view stack actually just kind of sets up the view. This goes back into some of the knowledge that you just kind of have to gain about how spacers work and how all of these stacks work. It's aligned in the view in a certain way that it fills in the view in the way that I want it too. It's pretty specific to this view so I'm not going to harp on it too long. HStack is the opposite of a VStack, is a horizontal stack of views. The image here will trail the text field. If you have a right to left language, I'm fairly certain it also switches to which is pretty cool. Everything is done through leading and trailing, not left and right so that you can just broadly support anything language based. Cool. **Jason Flax**: This is the image class. I actually think this is a great addition. They've done a whole bunch of cool stuff under the hood with this. Really simple, it's the magnifying glass that is the system name for the icon. With SwiftUI, Apple came out with this thing called SF Symbols. SF stands for San Francisco there, it ties to their San Francisco font. They came out with this set of icons, 600 or so, I'm not sure the exact number, that perfectly align themselves with the bog standard Swift UI views and the bog standard Apple fonts and all that kind of thing. You can download the program that lets you look at each one you have and see which ones you have access to. It's certainly not a secret. And then you can just access them in your app very simply as so. **Jason Flax**: The cool thing here that I like is that I remember so many times going back and forth back when I was doing app development with our designers, with our product team of I need this thing to look like this. It's like, right, well, that's kind of non-standard, can you provide us with the icon in the 12 different sizes we needed. I need it in a very specific format, do you have Sketch? You have Sketch, right? Cool, okay, Sketch, great. There's no need for that anymore. Of course there's still going to be uses for it. It's still great for sketching views and creating designs. But the fact that Apple has been working towards standardizing everything is great. It still leaves room for creativity. It's not to say you have to use these things but it's a fantastic option to have. **Jason Flax**: This is your standard text field. You type in it. Pretty straightforward. The cool thing here ties back to the search filter. We're passing that binding back in, that $ operator. We're using it again. You're going to keep seeing it. When this is modified, it updates the search filter and it's, for a lack of better way to put it, passed back between the views. Yeah. This slide is basically, yeah, it's the app binding. **Jason Flax**: Let's dig into the models. We have our custom view, we have our search view. We now have our reminder view that we need to dig into to have any of this working. We need our data models, right? Our really basic sort of dummy structures that contain the data, right? This is a list of reminders. This is the name of that list, the icon of that list. This is a reminder. 
A reminder has a name, a priority, a date that it's due, whether or not it's complete, things like that, right? We really want to store this data in a really simple way and that's whee Realm comes in handy. **Jason Flax**: This is our reminder class. I have it as an embedded object. Embedded objects are a semi-new feature of Realm that differ from regular objects. The long story short is that they effectively enable cascade and deletes which means that if you have a reminder list and you delete it, you really wouldn't want your reminder still kind of hanging out in ether not attached to this high level list. So when you delete that list, EmbeddedObject enables it just be automatically nixed from the system. Oops, sorry, skipped a slide there. **Jason Flax**: RealmEnum is another little interesting thing here. It allows you to store enums in Realm. Unfortunately the support is only for basic enums right now but that is still really nice. In this case, the enum is of a priority. Reminders have priorities. There's a big difference between taking out the trash and, I don't know, going for a well checkup for the doctor kind of thing. Pretty standard stuff. Yeah. **Jason Flax**: ObjectKeyIdentifiable is something that we've also introduced recently. This one's a little more complex and it ties into combine. Combine is something I haven't actually said by name yet but it's the subscription based framework that Apple introduced that hooks into SwiftUI that enables all of the cool automatic updates of views. When you're updating a view, it's because new data is published and then you trigger that update. And that all happens through combine. Your data models are being subscribed to by the view effectively. What ObjectKeyIdentifiable does is that it provides unique identifier for your Realm object. **Jason Flax**: The key distinction there is that it's not providing an identifier for the class, it's providing an identifier for the object. Realm objects are live. If I have this reminder in memory here and then it's also say in a list somewhere, in a result set somewhere and it's a different address in memory, it's still the same object under the hood, it's still the same persisted object and we need to make sure that combine knows that. Otherwise, when notifying the view, it'll notify the view in a whole bunch of different places when in reality, it's all the same change. If I change the title, I only want it to notify the view once. And that's what ObjectKeyIdentifiable does. It tells combine, it tells SwiftUI this is the persisted reminder. **Jason Flax**: Reminder list, this is what I was talking about before. It is the reminder list that houses the reminders. One funny thing here, you have to do RealmSwift.List if you're defining them in the same file that you're importing SwiftUI. We picked the name List for lists because it was not array and that was many, many years ago but of course SwiftUI has come up with their own class called Lists. So just a little tidbit there. If you store your models in different classes, which for the sake of a non-demo project you would probably do, it probably won't be an issue for you. But just a funny little thing there. One thing I also wanted to point out is the icon field. This ties back to system name. I think it's really neat that you can effectively store UI based things in Realm. That's always been possible but it's just a lot neater now with all of the built in things that they supply you with. **Jason Flax**: Let's bring it way back to the content view. 
This again is the top level view that we have and let's go into the results list, ReminderListResultsView and see what's happening there. So as you can see, we're passing in again that $searchFilter which will filter this list which I'm circling. This is the ReminderListResultsView, it's a bit more code. There's a little bit more going on here but still 20 lines to be able to, sorry, add lists, delete lists, filter lists, all that kind of thing. So list and for each, as I was just referencing, have been introduced by SwiftUI. There seems to be a bit of confusion about the difference between the two especially with the last release of SwiftUI. At the beginning it was basically, right, lists are static data. They are data that is not going to change and ForEach is mutable data. That's still the general rule to go by. The awkward bit is that ForEach still needs to be nested in lists and lists can actually take mutable data now. I'd say in the June release, they'll probably clean this up a bit. But in the meantime, this is just kind of one of those, again, semi non-intuitive things that you have do with SwiftUI. **Jason Flax**: That said, if you think about the flip side and the nice part of this to spin it more positively, this is a table view. And I'm sure anybody that's worked with iOS remembers how large and cumbersome table views are. This is it. This is the whole table view. Yeah, it's a little non-intuitive but it's a few lines of code. I love it. It's a lot less thinking and mental real estate for table views to be taking up. **Jason Flax**: NavigationLink is also a bunch of magic sauce that ties back to the navigation view. This NavigationLink is going to just pass us to the DetailView when tapped which I'll display in a later slide. I'm going to tap on one of these reminder lists and it's going to take me to the detailed view that actually shows each reminder. Yeah. **Jason Flax**: Tying this all back with the binding, as you can see in the animation as we showed before, I type, it changes the view automatically and filters out the non-matching results. In this case, NSPredicate is the predicate that you have to create to actually filter the view. The search filter here is the binding that we passed in. That is automatically updated from the search view in the previous slides. When that changes, whenever this search filter is edited, it's going to force update the view. So basically, this is going to get called again. This filter is going to contain the necessary information to filter out the unnecessary data. And it's all going to tie into this StateRealmObject thing here. **Jason Flax**: What is StateRealmObject? This is our own homegrown property wrapper meant and designed in a way to mimic the way that state objects work in SwiftUI. It does all the same stuff. It has heap based storage that stores a reference to the underlying property. In this case, this is a fancy little bit of syntactic sugar to be able to store results on a view. Results is a Realm type that contains, depending on the query that you provide, either entire view of the table or object type or class type that you're looking up. Or that table then queried on to provide you with what you want which is what's happened here with the filter NSPredicate. This is going to tie into the onDelete method. Realm objects function in a really specific way that I'll get to later. 
But because everything is live always with Realm, we need to store State slightly differently than the way that the @State and @ObservedObject property wrappers naturally do. **Jason Flax**: OnDelete is, again, something really cool. It eliminates so much code as you can see from the animation here. Really simple, just swipe left, you hit delete or swipe right depending on the language and it just deletes it. Simple as. The strange non-intuitive thing that I'll talk about first is the fact that that view, that swipe left ability is enabled simply by adding this onDelete method to the view hierarchy. That's a lot of implicit behavior. I'm generally not keen on implicit behavior. In this case, again, enabling something really cool in a small amount of code that is just simply institutional knowledge that has to be learned. **Jason Flax**: When you delete it, and this ties into the StateRealmObject and the $ remove here, with, I suppose it'll be the next update, of Realm Swift, the release of StateRealmObject, we are projecting a binding similar to the way that State does. We've added methods to that binding to allow you to really simply remove, append and move objects within a list or results depending on your use case. What this is doing is wrapping things in a write transaction. So for those that are unfamiliar with Realm, whenever you modify a managed type or a managed object within Realm, that has to be done within a write transaction. It's not a lot of code but considering SwiftUI's very declarative structure, it would be a bit frustrating to have to do that all over your views. Always wrapping these bound values in a write transaction. So we provided a really simple way to do that automagically under the hood, $ property name.remove, .append, .whatever. That's going to properly remove it. **Jason Flax**: In this case because it's results, it's going to just remove the object from the table. It's going to notify the view, this results set, that, right, I've had something removed, you need to update. It's going to refresh the view. And as you can see, it will always show the live state of your database of the Realm which is a pretty neat thing to sort of unlock here is the two-way data binding. **Jason Flax**: ReminderListRowView, small shout out, it's just the actual rows here. But digging into it, we're going to be passing from the ForEach each one of these reminder lists. Lists is kind of an unfortunate name because lists is also a concept in Realm. It's a group of reminders, the list of reminders. We're going to pass that into the row view. That is going to hydrate this ObservedRealmObject which is the other property wrapper I mentioned. Similarly to ObservedObject which anyone that's worked with SwiftUI so far has probably encountered ObservedObject, this is the Realm version. This does some special things which I'm happy to talk about in the Q&A later. But basically what this is doing is again binding this to the view. You can use the $ operator. In this case, the name of the reminder list is passed into a TextField. When you edit that, it's going to automatically persist to the realm and update the view. **Jason Flax**: In my head I'm currently referring to this as a bound property because of the fact that it's a binding. Binding is a SwiftUI concept that we're kind of adopting with Realm which had made things work easy peasy. 
So stepping back and going into the actual ReminderListView which is the detailed view that the navigation link is going to send us to, destination is a very accurate parameter name here. Let's dig in. Bit more code here. This is your classic detailed view, right? There's a back button that sends you back to the reminders view. There's a couple other buttons here in the navigation view. Not super happy that edit's next to add but it was the best to do right now. This is going to be the title of the view, work items. These are things I have to do for work, right? Had to put together this SwiftUI talk, had to put together property wrappers. And all my chores were done as you saw on the other view so here we are. Let's take a look. **Jason Flax**: Really simple to move and delete things, right? So you hit the edit button, you move them around. Very, very similar to OnDelete. SwiftUI recognizes that this onMove function has been appended to the view hierarchy and it's going to add this ability which you can see over here on the right when the animation plays, these little hamburger bars. It enables those when you add that to the view. It's something throws people off a lot. There's a load of of stack overflow questions like how do I move things, how do I move things? I've put the edit button on, et cetera. You just add the onMove function. And again, tying back to the ObservedRealmObject that we spoke about in the ReminderListRowView, we added these operators for you from the $ operator, move and remove to be able to remove and delete objects without having to wrap things in a write transaction. **Jason Flax**: And just one last shout out to those sort of bound methods here. In this case, we hit add, there's a few bits of code here that I can dig into later or we can send around the deck or whatever if people are interested in what's going on because there is sort of a custom text field here that when I edit it's focused on. SwiftUI does not currently offer the ability to manually focus views so I had to do some weird stuff with UIViewRepresentable there. The $ append is doing exactly what I said. It's adding a brand new reminder to the reminder list without having to write things in a realm, wrap things in a write transaction, sorry. **Jason Flax**: What I've kind of shown there is how two-way data binding functions with SwiftUI, right? Think about what we did. We added to a persisted list, we removed from a persisted list. We moved objects around a persisted list, we changed fields in persisted objects. But didn't actually have to do anything else. We didn't have to have a V-model, we didn't have to abstract anything else out. I know, of course, this is a very, very simple application. It's a few hundred lines of code, not even, probably like 200. Of course as your application scales, if you have a chat app, which our developer relations team is currently working on using Realm in SwiftUI, you have to manage a whole bunch more State. You're probably going to have a large ObservedObject app state class that most SwiftUI apps still need to have. But otherwise, it doesn't seem to make much sense to create these middle layers when Realm offers the ability to just keep everything live and fresh at all times. **Jason Flax**: I'm going to kind of take a stance on architectures here and say that most of them kind of go away. I'd say many people here would be familiar with MVC, right? There's a view, there's a controller, there's a model. 
But even just briefly talking a look at these arrows and comparing them to the example that I just went through, these don't really apply anymore. You don't need the controller to send updates to the model because the models being updated by the view, right? And the views not even really updating the model. The models just being updated. And it certainty doesn't need to receive State from the view because it's stateful itself. It has it's state, it is live. It is ready to go. So this whole thing, there is no controller anymore. It eliminates the C in MVC. In which case I say dump it. There's no need for it anymore. I'm over it, I don't want to hear it MVC again. I'm also just kidding. Of course there still will be uses for it but I don't think it has much for us. **Jason Flax**: MVVM, same thing. It's a bit odd when you have Realm live objects to want to notify the models of updates when nine times out of 10 you probably want the model to just immediately receive those updates. There are still a couple of use cases where say you have a form, you want something not persisted to the Realm. Maybe you want to save some kind of draft state, maybe you don't want everything persisted immediately. Or maybe your really complex objects that you don't want to automatically use these bound write transactions on because that can be pretty expensive, right? There are still use cases where you want to abstract this out but I cannot see a reason why you should use V-models as a hard and fast rule for your code base. In which case I would say, throw it away, done with it. Again MVVM, not really very useful anymore. **Jason Flax**: Viper is another very popular one. I think this one's actually a bit newer because not really anybody was talking about it back when I was doing iOS around a few years ago for actual UI applications. It's view interactor presenter entity router, it certainly doesn't roll off the tongue nicely though I suppose Viper's meant to sound bad ass or something. But I actually think this one worked out pretty well for the most part for UI. It created these really clear-cut relationships and offered a bit of granularity that was certainly needed that maybe MVVM or MVC just couldn't supply to somebody building an app. But I don't really think it fits with UI. Again, you'll end up with a bunch of ravioli code, all these neat little parts. **Jason Flax**: The neat little parts of SwiftUI should be all the view components that your building and the models that you have. But creating things like routers and presenters doesn't really make sense when it's all just kind of baked in. And it's a concept that I've had to get used to, oh right, all this functionality is just baked in. It's just there, we just have to use it. So yeah, doesn't remotely sound correct anymore for our use case. I don't generally think you should use it with SwiftUI. You absolutely can but I know that we actually played with it in-house to see, right, does this makes sense? And you just end up with a ton of boilerplate. **Jason Flax**: So this is MVI, Model View Intent. If I'm not mistaken, this slide was actually presented and WWDC. This is what I'm proposing as the sort of main architecture to use for SwiftUI apps mainly because of how loosely it can be interpreted. So even here the model is actually state. So your data models aren't even mentioned on this graphic, they're kind of considered just part of the state of the application. 
Everything is state based and because the view and the state have this two-way relationship, everything is just driven by user action, right? The user action is what mutates the models which mutate the view which show the latest and greatest, right? So personally, I think this the way to go. It keeps things simple. Keeping it simple is kind of the mantra of SwiftUI. It's trying to abstract out two decades, three decades of UI code to make things easier for us. So why makes things more difficult, right? **Jason Flax**: Thank you very much everyone. Thanks for hearing me out, rambled about architectures. That's all. Just wanted to give a quick shout out to some presentations coming down the line. Nichola on the left there is going to give a presentation on Xamarin Guidance with Realm using the .Net SDK. And Andrew Morgan on the right is going to show us how to use Realm Sync in a real live chat application. The example that I've shown there is currently, unfortunately on a branch. It will eventually move to the main branch. But for now it's there while it's still in review. And yeah, thanks for your time everyone. **Ian Ward**: Great. Well Jason, thank you so much. That was very enlightening. I think we do have a couple questions here so I think we'll transition into Q&A. I'll do the questions off the chat first and then we can open it up for other questions if they come to you. First one there is the link to the code somewhere? So you just saw that. Is it in the example section of the Realm Cocoa repo or on your branch or what did you- **Jason Flax**: It is currently in a directory called SwiftUI TestToast. It will move to the examples repo and be available there. I will update the code after this user group. **Ian Ward**: Awesome. The next question here is around the documentation for all the SwiftUI constructs like State, Realm, Object and some of the property wrappers we have there. I don't know if you caught it yet but I guess this hasn't been released yet. This is the pre new release. You guys are getting the preview right now of the new hotness coming out. Is that right? **Jason Flax**: That is correct. Don't worry, there will be a ton of documentation when it is released. And that's not just this thing does this thing, it will also be best practices with it. There's some implicit reasons why you might want to use StateRealmObject verses ObservedRealmObject. But it's all Opensource, it's all available. You'll be able to look at it. And of course we're always available on GitHub to chat about it if the documentation isn't clear. **Ian Ward**: Yeah, and then maybe you could talk a little bit about some of the work that the Cocoa team has done to kind of expose this stuff. You mentioned property wrappers, a lot of that has to do with not having to explicitly call Realm.write in the view. But also didn't we do stuff for sync specific objects? We had the user and the app state and you made that as part of ObservableObject, is that right? **Jason Flax**: Correct, yeah. I didn't have time to get to sync here unfortunately. But yes, if you are using MongoDB Realm, which contains the sync component of Realm, we have enabled it so that the app class and the user class will also automatically update the view state similar to what I presented earlier. **Ian Ward**: Awesome. And then, I think this came up during some of your architecture discussions I believe around MVC. Question is from Simon, what if you have a lot of writes. What if you have a ton of writes? 
I guess the implication here is that you can lag the UI, right, if you're writing a lot. So is there any best practices around that? Should we be dispatching to the background? How do you think about that? **Jason Flax**: Yeah, I would. That would be the first thing if you are doing a ton of writes, move them off to a background queue. The way that I presented to use Realm and SwiftUI is the lowest common denominator, simplest way bog standard way for really simple things, right? If you do have a ton of writes, you're not locked into any of this functionality. All of the old Realm API is still there, it's not old, it's the current API, right? As opposed to doing $ list.append or whatever, if you have 1,000 populated objects ready to hop in that list, all of those SwiftUI closures that I was kind of supplying a method to, you can just do the Realm.write in there. You can do it as you would normally do it. And as your app grows in complexity, you'll have to end up doing that. As far as the way that you want to organize your application around that, one thing to keep in mind here, SwiftUI is really new. I don't know how many people are using it in production yet. Best practices with some of this stuff is going to come in time as more people use it, as more ideas come about. So for now, yeah, I would do things the old way when it comes to things like extensive writes. **Ian Ward**: Yeah, that's fair. Simon, sorry I think you had a followup question here. Do you want to just unmute yourself and maybe discuss a little bit about what you're talking about with the write transaction? I can ask to unmute, how do I do that? **Jason Flax**: It seems the question is about lag, it's about the cutoffs, I don't need a real time sync. Okay, yeah, I can just answer the question then, that's no bother. **Ian Ward**: I think he's referring to permitting a transaction for a character stroke. I don't know if we would really look to, that would probably not be our best practice or how would you think about that for each character. **Jason Flax**: It would depend. Write transactions aren't expensive for something simple like string on a view. Now it seems like if, local usage, okay. If you're syncing that up to the server, yes, I would not recommend committing a write transaction on each keystroke but it isn't that expensive to do. If you do want to batch those, again, that is available for you to do. You can still mess with our API and play around to the point where, right, maybe you only want to batch certain ones together. **Jason Flax**: What I would do in that case if you are genuinely worried about performance, I would not use a string associated with your property. I would pass in a plain old string and observe that string. And whenever you want to actually commit that string, depending on let's say you want every fifth keystroke, I wouldn't personally use that because there's not really a rhyme or reason for that. But if you wanted that, then you monitor it. You wait for the fifth one and then you write it to the Realm. Again, you don't have to follow the rules of writing on every keystroke but it is available to people that want it. **Ian Ward**: Got it. Yeah, that's important to note here. Some of the questions are when are we going to get the release? I think \crosstalk 00:42:56\] chomping on the bit here. And then what version are we thinking this will be released? **Jason Flax**: I don't think it would be a major bump as this isn't going to break the existing API. So it'll probably be 10.6. 
I still have to consult with the team on that. But my guess would be 10.6 based on the current versioning from... As far as when. I will not vaguely say soon, as much as I want to. But my guess would be considering that this is already in review, it'll be in the next week or two. So hold on tight, it's almost there. **Ian Ward**: And then I think there's a question here around freezing. And I guess we haven't released a thaw API but all of that is getting, the freezing and thawing is getting wrapped in these property wrappers. Is that what we're doing, right? **Jason Flax**: Correct, yeah. Basically because SwiftUI stores so much State, you actually need to freeze Realm objects before you pass them into the views? Why is that? If you have a list of things, SwiftUI keeps a State of that list so that it can diff it against changes. The problem is RealmObjects and RealmLists, they're all live. SwiftUI actually cannot diff the changes in a list because it's just going to see it as the same exact list. It also presented itself in a weird way where if you deleted something from the list, because it could cache the old version of it, it would crash the app because it was trying to render an index of the list that no longer exists. So what we're doing under the hood, because previously you had to freeze your list, you had to thaw the objects that come out of the list and then you could finally operate on them, introduced a whole bunch of complexity that we've now abstracted out with these property wrappers. **Ian Ward**: And we have some questions around our build system integration, Swift Package Manager, CocoaPods, Carthage. Maybe you want to talk a little bit about some of the work that we've done over the last few months. I know it was kind of a bear getting into SPM but I feel like we should have full Swift Package Managed Support. Is that right? **Jason Flax**: We do, yeah. Full SPM support. So the reason that that's changed for us is because previously our sync client was closed source. It's been open sourced. I probably should not look at the chat at the same time as talking. Sorry. It's become open source now. Everything is all available to be viewed, as open source projects are. That change enabled us to be able to use SPM properly. So basically under the hood SPM is downloading the core dependency and then supplying our source files and users can just use it really simply. Thanks for the comments about the hair. **Jason Flax**: The nice thing is, so we're promoting SPM as the main way we want people to consume Realm. I know that that's much easier said than done because so many applications are still reliant on CocoaPods and Carthage. Obviously we're going to continue to support them for as long as they're being used. It's not even a question of whether or not we drop support but I would definitely recommend that if you are having trouble for some reason with CocoaPods or Carthage, to start moving over to SPM because it's just so much simpler. It's so much easier to manage dependencies with and doesn't come with the weird cost of XE work spaces and stale dependencies and CocoaPod downloads which can take a while, so yeah. **Ian Ward**: I think unfortunately, part of it was that we were kind of hamstrung a little bit by the CocoaPods team, right? They had to add a particular source code for us and then people would open issues on our GitHub and we'd have to send them back. 
It's good that now we have a blessed installable version of Swift Package Manger so I think hopefully will direct people towards that. Of course, we'd love to continue to support CocoaPods but sometimes we get hamstrung by what that team supports. So next question here is regarding the dependencies. So personally, I like keeping my dependencies in check. I usually keep Realm in a separate target to make my app not aware of what persistence I use. So this is kind of about abstracting away. What you described in the presentation it seems like you suggest to integrate Realm deeply in the UI part of the app. I was thinking more about using publishers with Realm models, erase the protocol types instead of the integrating Realm objects with the RealmStateObject inside of my UI. Do you have any thoughts on that Jason? **Jason Flax**: I was thinking about using publishers with Realm models, erase the protocol types, interesting. I'm not entirely sure what you mean Andre about erasing them to protocol types and then using the base object type and just listening to changes for those. Because it sounds like if that's what you're doing, when RealmStateObject, ObservedRealmObject are release, it seems like it would obviate the need for that. But I could also be misunderstanding what you're trying to do here. Yeah, I don't know if you have a mic on or if you want to followup but it does seem like the feature being released here would obviate the need for that as all of the things that would need to listen to are going to be updating the view. I suppose there could be a case where if you want to ignore certain properties, if there are updates to them, then maybe you'd want some customization around that. And maybe there's something that we can release feature-wise there to support that but that's the only reason I could think why you'd want to abstract out the listening part of the publishers. **Ian Ward**: Okay, great. Any other questions? It looks like a couple questions have been answered via the chat so thank you very much. Any other questions? Anyone else have anything? If not, we can conclude. Okay, great. Well, thank you so much Jason. This has been great. If you have any additional questions, please come to our forums, forums.realm.io, you can ask them there. Myself and Jason and the Cocoa team are on there answering questions so please reach out to us. You can reach out on our Twitter @Realm and yeah, of course on our GitHub as well Realm-cocoa. Thank you so much and have a great rest of your week. **Jason Flax**: Thanks everyone. Thanks for tuning in. Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months. To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
md
{ "tags": [ "Realm", "Swift" ], "pageDescription": "Missed the first of our new Realm Meetups on SwiftUI Best Practices with Realm? Don't worry, you can catch up here.", "contentType": "Article" }
SwiftUI Best Practices with Realm
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/serverless-development-lambda-atlas
created
# Write A Serverless Function with AWS Lambda and MongoDB The way we write code, deploy applications, and manage scale is constantly changing and evolving to meet the growing demands of our stakeholders. In the past, companies commonly deployed and maintained their own infrastructure. In recent times, everyone is moving to the cloud. The cloud is pretty nebulous (heh) though and means different things to different people. Maybe one day in the future, developers will be able to just write code and not worry about how or where it's deployed and managed. That future is here and it's called **serverless computing**. Serverless computing allows developers to focus on writing code, not managing servers. Serverless functions further allow developers to break up their application into individual pieces of functionality that can be independently developed, deployed, and scaled. This modern practice of software development allows teams to build faster, reduce costs, and limit downtime. In this blog post, we'll get a taste for how serverless computing can allow us to quickly develop and deploy applications. We'll use AWS Lambda as our serverless platform and MongoDB Atlas as our database provider. Let's get to it. To follow along with this tutorial, you'll need the following: - MongoDB Atlas Account (Sign up for Free) - AWS Account - Node.js 12 >MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud. Learn more about the Atlas Free Tier cluster here. ## My First AWS Lambda Serverless Function AWS Lambda is Amazon's serverless computing platform and is one of the leaders in the space. To get started with AWS Lambda, you'll need an Amazon Web Services account, which you can sign up for free if you don't already have one. Once you are signed up and logged into the AWS Management Console, to find the AWS Lambda service, navigate to the **Services** top-level menu and in the search field type "Lambda", then select "Lambda" from the dropdown menu. You will be taken to the AWS Lambda dashboard. If you have a brand new account, you won't have any functions and your dashboard should look something like this: We are ready to create our first serverless function with AWS Lambda. Let's click on the orange **Create function** button to get started. There are many different options to choose from when creating a new serverless function with AWS Lambda. We can choose to start from scratch or use a blueprint, which will have sample code already implemented for us. We can choose what programming language we want our serverless function to be written in. There are permissions to consider. All this can get overwhelming quickly, so let's keep it simple. We'll keep all the defaults as they are, and we'll name our function **myFirstFunction**. Your selections should look like this: - Function Type: **Author from scratch** - Function Name: **myFirstFunction** - Runtime: **Node.js 12.x** - Permissions: **Create a new role with basic Lambda permissions**. With these settings configured, hit the orange **Create function** button to create your first AWS Lambda serverless function. This process will take a couple of seconds, but once your function is created you will be greeted with a new screen that looks like this: Let's test out our function to make sure that it runs. 
If we scroll down to the **Function code** section and take a look at the current code, it should look like this:

```javascript
exports.handler = async (event) => {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};
```

Let's hit the **Test** button to execute the code and make sure it runs. Hitting the **Test** button the first time will ask us to configure a test event. We can keep all the defaults here, but we will need to name our event. Let's name it **RunFunction** and then hit the **Create** button to create the test event. Now click the **Test** button again and the code editor will display the function's execution results.

We got a successful response with a message saying **"Hello from Lambda!"** Let's make an edit to our function. Let's change the message to "My First Serverless Function!!!". Once you've made this edit, hit the **Save** button and the serverless function will be re-deployed. The next time you hit the **Test** button you'll get the updated message.

This is pretty great. We are writing Node.js code in the cloud and having it update as soon as we hit the save button. Our function doesn't do a whole lot right now, though, and more importantly, it is not yet exposed to the Internet. This means the functionality we have created cannot be consumed by anyone. Let's fix that next. We'll use AWS API Gateway to expose our AWS Lambda function to the Internet. To do this, scroll up to the top of the page and hit the **Add Trigger** button in the **Designer** section of the page.

In the trigger configuration dropdown menu we'll select **API Gateway** (it'll likely be the first option). From here, we'll select **Create an API** and for the type, choose **HTTP API**. To learn about the differences between HTTP APIs and REST APIs, check out this AWS docs page. For security, we'll select **Open**, as securing the API endpoint is out of the scope of this article. We can leave all other options alone and just hit the **Add** button to create our API Gateway.

Within a couple of seconds, we should see our Designer panel updated to include the API Gateway we created. Clicking on the API Gateway and opening up its details will give us additional information, including the URL where we can now call our serverless function from our browser. In my case, the URL is . Navigating to this URL displays the response you'd expect:

**Note:** If you click the above live URL, you'll likely get a different result, as it'll reflect a change made later in this tutorial.

We're making great progress. We've created, deployed, and exposed an AWS Lambda serverless function to the Internet. Our function doesn't do much though. Let's work on that next.

Let's add some real functionality to our serverless function. Unfortunately, the online editor currently does not allow you to manage dependencies or run scripts, so we'll have to shift our development to our local machine. To keep things concise, we'll do our development locally from now on. Once we're happy with the code, we'll zip it up and upload it to AWS Lambda. This is just one way of deploying our code, and while it's not necessarily the most practical for a real-world use case, it will keep our tutorial easy to follow: we won't have to manage the extra steps of setting up the AWS CLI, or of pushing our code to GitHub and using GitHub Actions to deploy our AWS Lambda functions.
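For a taste of what the AWS CLI route looks like, once the CLI is installed and configured with credentials, updating an existing function from a local zip file is a single command (a sketch; the zip file name here is just an example):

```bash
# Upload a new deployment package to an existing Lambda function
aws lambda update-function-code \
  --function-name myFirstFunction \
  --zip-file fileb://myFirstFunction.zip
```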
These options are things you should explore when deciding to build actual applications with serverless frameworks as they'll make it much easier to scale your apps in the long run. To set up our local environment let's create a new folder that we'll use to store our code. Create a folder and call it `myFirstFunction`. In this folder create two files: `index.js` and `package.json`. For the `package.json` file, for now let's just add the following: ``` javascript { "name": "myFirstFunction", "version": "1.0.0", "dependencies": { "faker" : "latest" } } ``` The `package.json` file is going to allow us to list dependencies for our applications. This is something that we cannot do at the moment in the online editor. The Node.js ecosystem has a plethora of packages that will allow us to easily bring all sorts of functionality to our apps. The current package we defined is called `faker` and is going to allow us to generate fake data. You can learn more about faker on the project's GitHub Page. To install the faker dependency in your `myFirstFunction` folder, run `npm install`. This will download the faker dependency and store it in a `node_modules` folder. We're going to make our AWS Lambda serverless function serve a list of movies. However, since we don't have access to real movie data, this is where faker comes in. We'll use faker to generate data for our function. Open up your `index.js` file and add the following code: ``` javascript const faker = require("faker"); exports.handler = async (event) => { // TODO implement const movie = { title: faker.lorem.words(), plot: faker.lorem.paragraph(), director: `${faker.name.firstName()} ${faker.name.lastName()}`, image: faker.image.abstract(), }; const response = { statusCode: 200, body: JSON.stringify(movie), }; return response; }; ``` With our implementation complete, we're ready to upload this new code to our AWS Lambda serverless function. To do this, we'll first need to zip up the contents within the `myFirstFunction` folder. The way you do this will depend on the operating system you are running. For Mac, you can simply highlight all the items in the `myFirstFunction` folder, right click and select **Compress** from the menu. On Windows, you'll highlight the contents, right click and select **Send to**, and then select **Compressed Folder** to generate a single .zip file. On Linux, you can open a shell in `myFirstFunction` folder and run `zip aws.zip *`. **NOTE: It's very important that you zip up the contents of the folder, not the folder itself. Otherwise, you'll get an error when you upload the file.** Once we have our folder zipped up, it's time to upload it. Navigate to the **Function code** section of your AWS Lambda serverless function and this time, rather than make code changes directly in the editor, click on the **Actions** button in the top right section and select **Upload a .zip file**. Select the compressed file you created and upload it. This may take a few seconds. Once your function is uploaded, you'll likely see a message that says *The deployment package of your Lambda function "myFirstFunction" is too large to enable inline code editing. However, you can still invoke your function.* This is ok. The faker package is large, and we won't be using it for much longer. Let's test it. We'll test it in within the AWS Lambda dashboard by hitting the **Test** button at the top. We are getting a successful response! The text is a bunch of lorem ipsum but that's what we programmed the function to generate. 
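If you'd like to sanity-check the handler locally before zipping it up (optional — this isn't part of the tutorial), a tiny driver script next to `index.js` works; the file name here is just illustrative:

``` javascript
// invoke-local.js — run with `node invoke-local.js` from the myFirstFunction folder
const { handler } = require("./index");

// Pass an empty event, then print the status code and the parsed body
handler({})
  .then((response) => console.log(response.statusCode, JSON.parse(response.body)))
  .catch(console.error);
```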
Every time you hit the test button, you'll get a different set of data. ## Getting Up and Running with MongoDB Atlas Generating fake data is fine, but let's step our game up and serve real movie data. For this, we'll need access to a database that has real data we can use. MongoDB Atlas has multiple free datasets that we can utilize and one of them just happens to be a movie dataset. Let's start by setting up our MongoDB Atlas account. If you don't already have one, sign up for one here. >MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud. When you are signed up and logged into the MongoDB Atlas dashboard, the first thing we'll do is set up a new cluster. Click the **Build a Cluster** button to get started. From here, select the **Shared Clusters** option, which will have the free tier we want to use. Finally, for the last selection, you can leave all the defaults as is and just hit the green **Create Cluster** button at the bottom. Depending on your location, you may want to choose a different region, but I'll leave everything as is for the tutorial. The cluster build out will take about a minute to deploy. While we wait for the cluster to be deployed, let's navigate to the **Database Access** tab in the menu and create a new database user. We'll need a database user to be able to connect to our MongoDB database. In the **Database Access** page, click on the **Add New Database User** button and give your user a unique username and password. Be sure to write these down as you'll need them soon enough. Ensure that this database user can read and write to any database by checking the **Database User Privileges** dropdown. It should be selected by default, but if it's not, ensure that it's set to **Read and write to any database**. Next, we'll also want to configure network access by navigating to the **Network Access** tab in the dashboard. For the sake of this tutorial, we'll enable access to our database from any IP as long as the connection has the correct username and password. In a real world scenario, you'll want to limit database access to specific IPs that your application lives on, but configuring that is out of scope for this tutorial. Click on the green **Add IP Address** button, then in the modal that pops up click on **Allow Access From Anywhere**. Click the green **Confirm** button to save the change. By now our cluster should be deployed. Let's hit the **Clusters** selection in the menu and we should see our new cluster created and ready to go. It will look like this: One final thing we'll need to do is add our sample datasets. To do this, click on the **...** button in your cluster and select the **Load Sample Dataset** option. Confirm in the modal that you want to load the data and the sample dataset will be loaded. After the sample dataset is loaded, let's click the **Collections** button in our cluster to see the data. Once the **Collections** tab is loaded, from the databases section, select the **sample_mflix** database, and the **movies** collection within it. You'll see the collection information at the top and the first twenty movies displayed on the right. We have our dataset! Next, let's connect our MongoDB databases that's deployed on MongoDB Atlas to our Serverless AWS Lambda function. ## Connecting MongoDB Atlas to AWS Lambda We have our database deployed and ready to go. All that's left to do is connect the two. 
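Before wiring Atlas into the Lambda function, it can save a debugging round trip to confirm that your connection string and credentials work from a quick standalone script. This is an optional sketch (not part of the tutorial) that uses the same `mongodb` driver we'll install in the next step; the URI values are placeholders you'd replace with your own:

``` javascript
// test-connection.js — run locally with `node test-connection.js`
const { MongoClient } = require("mongodb");

// Placeholder connection string — substitute your own username, password, and cluster address
const uri =
  "mongodb+srv://<username>:<password>@<your-cluster>.mongodb.net/test?retryWrites=true&w=majority";

async function main() {
  const client = await MongoClient.connect(uri);
  const count = await client.db("sample_mflix").collection("movies").countDocuments();
  console.log(`Connected — sample_mflix.movies has ${count} documents`);
  await client.close();
}

main().catch(console.error);
```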
On our local machine, let's open up the `package.json` file and add `mongodb` as a dependency. We'll remove `faker` as we'll no longer use it for our movies. ``` javascript { "name": "myFirstFunction", "version": "1.0.0", "dependencies": { "mongodb": "latest" } } ``` Then, let's run `npm install` to install the MongoDB Node.js Driver in our `node_modules` folder. Next, let's open up `index.js` and update our AWS Lambda serverless function. Our code will look like this: ``` javascript // Import the MongoDB driver const MongoClient = require("mongodb").MongoClient; // Define our connection string. Info on where to get this will be described below. In a real world application you'd want to get this string from a key vault like AWS Key Management, but for brevity, we'll hardcode it in our serverless function here. const MONGODB_URI = "mongodb+srv://:@cluster0.cvaeo.mongodb.net/test?retryWrites=true&w=majority"; // Once we connect to the database once, we'll store that connection and reuse it so that we don't have to connect to the database on every request. let cachedDb = null; async function connectToDatabase() { if (cachedDb) { return cachedDb; } // Connect to our MongoDB database hosted on MongoDB Atlas const client = await MongoClient.connect(MONGODB_URI); // Specify which database we want to use const db = await client.db("sample_mflix"); cachedDb = db; return db; } exports.handler = async (event, context) => { /* By default, the callback waits until the runtime event loop is empty before freezing the process and returning the results to the caller. Setting this property to false requests that AWS Lambda freeze the process soon after the callback is invoked, even if there are events in the event loop. AWS Lambda will freeze the process, any state data, and the events in the event loop. Any remaining events in the event loop are processed when the Lambda function is next invoked, if AWS Lambda chooses to use the frozen process. */ context.callbackWaitsForEmptyEventLoop = false; // Get an instance of our database const db = await connectToDatabase(); // Make a MongoDB MQL Query to go into the movies collection and return the first 20 movies. const movies = await db.collection("movies").find({}).limit(20).toArray(); const response = { statusCode: 200, body: JSON.stringify(movies), }; return response; }; ``` The `MONGODB_URI` is your MongoDB Atlas connection string. To get this value, head over to your MongoDB Atlas dashboard. On the Clusters overview page, click on the **Connect** button. From here, select the **Connect your application** option and you'll be taken to a screen that has your connection string. **Note:** Your username will be pre-populated, but you'll have to update the **password** and **dbname** values. Once you've made the above updates to your `index.js` file, save it, and zip up the contents of your `myFirstFunction` folder again. We'll redeploy this code, by going back to our AWS Lambda function and uploading the new zip file. Once it's uploaded, let's test it by hitting the **Test** button at the top right of the page. It works! We get a list of twenty movies from our `sample_mflix` MongoDB database that is deployed on MongoDB Atlas. We can also call our function directly by going to the API Gateway URL from earlier and seeing the results in the browser as well. Navigate to the API Gateway URL you were provided and you should see the same set of results. 
If you need a refresher on where to find it, navigate to the **Designer** section of your AWS Lambda function, click on **API Gateway**, click the **Details** button to expand all the information, and you'll see an **API Endpoint** URL which is where you can publicly access this serverless function. The query that we have written returns a list of twenty movies from our `sample_mflix.movies` collection. You can modify this query to return different types of data easily. Since this file is much smaller, we're able to directly modify it within the browser using the AWS Lambda online code editor. Let's change our query around so that we get a list of twenty of the highest rated movies and instead of getting back all the data on each movie, we'll just get back the movie title, plot, rating, and cast. Replace the existing query which looks like: ``` javascript const movies = await db.collection("movies").find({}).limit(20).toArray(); ``` To: ``` javascript const movies = await db.collection("movies").find({},{projection: {title: 1, plot: 1, metacritic: 1, cast:1}}).sort({metacritic: -1}).limit(20).toArray() ``` Our results will look slightly different now. The first result we get now is **The Wizard of Oz** which has a Metacritic rating of 100. ## One More Thing... We created our first AWS Lambda serverless function and we made quite a few modifications to it. With each iteration we changed the functionality of what the function is meant to do, but generally we settled on this function retrieving data from our MongoDB database. To close out this article, let's quickly create another serverless function, this one to add data to our movies collection. Since we've already become pros in the earlier section, this should go much faster. ### Creating a Second AWS Lambda Function We'll start by navigating to our AWS Lambda functions homepage. Once here, we'll see our existing function accounted for. Let's hit the orange **Create function** button to create a second AWS Lambda serverless function. We'll leave all the defaults as is, but this time we'll give the function name a more descriptive name. We'll call it **AddMovie**. Once this function is created, to speed things up, we'll actually upload the .zip file from our first function. So hit the **Actions** menu in the **Function Code** section, select **Upload Zip File** and choose the file in your **myFirstFunction** folder. To make sure everything is working ok, let's create a test event and run it. We should get a list of twenty movies. If you get an error, make sure you have the correct username and password in your `MONGODB_URI` connection string. You may notice that the results here will not have **The Wizard of Oz** as the first item. That is to be expected as we made those edits within our `myFirstFunction` online editor. So far, so good. Next, we'll want to capture what data to insert into our MongoDB database. To do this, let's edit our test case. Instead of the default values provided, which we do not use, let's instead create a JSON object that can represent a movie. Now, let's update our serverless function to use this data and store it in our MongoDB Atlas database in the `movies` collection of the `sample_mflix` database. 
We are going to change our MongoDB `find()` query: ``` javascript const movies = await db.collection("movies").find({}).limit(20).toArray(); ``` To an `insertOne()`: ``` javascript const result = await db.collection("movies").insertOne(event); ``` The complete code implementation is as follows: ``` javascript const MongoClient = require("mongodb").MongoClient; const MONGODB_URI = "mongodb+srv://:@cluster0.cvaeo.mongodb.net/test?retryWrites=true&w=majority"; let cachedDb = null; async function connectToDatabase() { if (cachedDb) { return cachedDb; } const client = await MongoClient.connect(MONGODB_URI); const db = await client.db('sample_mflix'); cachedDb = db; return db } exports.handler = async (event, context) => { context.callbackWaitsForEmptyEventLoop = false; const db = await connectToDatabase(); // Insert the event object, which is the test data we pass in const result = await db.collection("movies").insertOne(event); const response = { statusCode: 200, body: JSON.stringify(result), }; return response; }; ``` To verify that this works, let's test our function. Hitting the test button, we'll get a response that looks like the following image: This tells us that the insert was successful. In a real world application, you probably wouldn't want to send this message to the user, but for our illustrative purposes here, it's ok. We can also confirm that the insert was successful by going into our original function and running it. Since in our test data, we set the metacritic rating to 101, this result should be the first one returned. Let's check. And we're good. Our Avengers movie that we added with our second serverless function is now returned as the first result because it has the highest metacritic rating. ## Putting It All Together We did it! We created our first, and second AWS Lambda serverless functions. We learned how to expose our AWS Lambda serverless functions to the world using AWS API Gateway, and finally we learned how to integrate MongoDB Atlas in our serverless functions. This is just scratching the surface. I made a few call outs throughout the article saying that the reason we're doing things a certain way is for brevity, but if you are building real world applications I want to leave you with a couple of resources and additional reading. - MongoDB Node.js Driver Documentation - MongoDB Best Practices Connecting from AWS Lambda - Setting Up Network Peering - Using AWS Lambda with the AWS CLI - MongoDB University If you have any questions or feedback, join us on the MongoDB Community forums and let's keep the conversation going!
md
{ "tags": [ "Atlas", "JavaScript" ], "pageDescription": "Learn how to write serverless functions with AWS Lambda and MongoDB", "contentType": "Tutorial" }
Write A Serverless Function with AWS Lambda and MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/movie-score-prediction-bigquery-vertex-ai-atlas
created
# Movie Score Prediction with BigQuery, Vertex AI, and MongoDB Atlas Hey there! It’s been a minute since we last wrote about Google Cloud and MongoDB Atlas together. We had an idea for this new genre of experiment that involves BigQuery, BQML, Vertex AI, Cloud Functions, MongoDB Atlas, and Cloud Run and we thought of putting it together in this blog. You will get to learn how we brought these services together in delivering a full stack application and other independent functions and services the application uses. Have you read our last blog about Serverless MEAN stack applications with Cloud Run and MongoDB Atlas? If not, this would be a good time to take a look at that, because some topics we cover in this discussion are designed to reference some steps from that blog. In this experiment, we are going to bring BigQuery, Vertex AI, and MongoDB Atlas to predict a categorical variable using a Supervised Machine Learning Model created with AutoML. ## The experiment We all love movies, right? Well, most of us do. Irrespective of language, geography, or culture, we enjoy not only watching movies but also talking about the nuances and qualities that go into making a movie successful. I have often wondered, “If only I could alter a few aspects and create an impactful difference in the outcome in terms of the movie’s rating or success factor.” That would involve predicting the success score of the movie so I can play around with the variables, dialing values up and down to impact the result. That is exactly what we have done in this experiment. ## Summary of architecture Today we'll predict a Movie Score using Vertex AI AutoML and have transactionally stored it in MongoDB Atlas. The model is trained with data stored in BigQuery and registered in Vertex AI. The list of services can be composed into three sections: **1. ML Model Creation 2. User Interface / Client Application 3. Trigger to predict using the ML API** ### ML Model Creation 1. Data sourced from CSV to BigQuery - MongoDB Atlas for storing transactional data and powering the client application - Angular client application interacting with MongoDB Atlas - Client container deployed in Cloud Run 2. BigQuery data integrated into Vertex AI for AutoML model creation - MongoDB Atlas for storing transactional data and powering the client application - Angular client application interacting with MongoDB Atlas - Client container deployed in Cloud Run 3. Model deployed in Vertex AI Model Registry for generating endpoint API - Java Cloud Functions to trigger invocation of the deployed AutoML model’s endpoint that takes in movie details as request from the UI, returns the predicted movie SCORE, and writes the response back to MongoDB ## Preparing training data You can use any publicly available dataset, create your own, or use the dataset from CSV in GitHub. I have done basic processing steps for this experiment in the dataset in the link. Feel free to do an elaborate cleansing and preprocessing for your implementation. Below are the independent variables in the dataset: * Name (String) * Rating (String) * Genre (String, Categorical) * Year (Number) * Released (Date) * Director (String) * Writer (String) * Star (String) * Country (String, Categorical) * Budget (Number) * Company (String) * Runtime (Number) ## BigQuery dataset using Cloud Shell BigQuery is a serverless, multi-cloud data warehouse that can scale from bytes to petabytes with zero operational overhead. This makes it a great choice for storing ML training data. 
But there’s more — the built-in machine learning (ML) and analytics capabilities allow you to create no-code predictions using just SQL queries. And you can access data from external sources with federated queries, eliminating the need for complicated ETL pipelines. You can read more about everything BigQuery has to offer on the BigQuery product page.

BigQuery allows you to focus on analyzing data to find meaningful insights. In this blog, you'll use the **bq** command-line tool to load a local CSV file into a new BigQuery table. Follow the steps below to enable BigQuery:

### Activate Cloud Shell and create your project

You will use Cloud Shell, a command-line environment running in Google Cloud. Cloud Shell comes pre-loaded with **bq**.

1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
3. Enable the BigQuery API and open the BigQuery web UI.
4. From the Cloud Console, click Activate Cloud Shell. Make sure you navigate to the project and that it's authenticated. Refer to gcloud config commands.

## Creating and loading the dataset

A BigQuery dataset is a collection of tables. All tables in a dataset are stored in the same data location. You can also attach custom access controls to limit access to a dataset and its tables.

1. In Cloud Shell, use the `bq mk` command to create a dataset called "movies."

   ```
   bq mk --location=<> movies
   ```

   > Use --location=LOCATION to set the location to a region you can remember, and use the same region in the Vertex AI step later (both instances should be in the same region).

2. Make sure you have the data file (.csv) ready. The file can be downloaded from GitHub. Execute the following commands in Cloud Shell to clone the repository and navigate to the project:

   ```
   git clone https://github.com/AbiramiSukumaran/movie-score.git
   cd movie-score
   ```

   *You may also use a public dataset of your choice. To open and query the public dataset, follow the documentation.*

3. Use the `bq load` command to load your CSV file into a BigQuery table (please note that you can also upload directly from the BigQuery UI):

   ```
   bq load --source_format=CSV --skip_leading_rows=1 movies.movies_score \
   ./movies_bq_src.csv \
   Id:numeric,name:string,rating:string,genre:string,year:numeric,released:string,score:string,director:string,writer:string,star:string,country:string,budget:numeric,company:string,runtime:numeric,data_cat:string
   ```

   - `--source_format=CSV` — uses the CSV data format when parsing the data file.
   - `--skip_leading_rows=1` — skips the first line in the CSV file because it is a header row.
   - `movies.movies_score` — defines the table the data should be loaded into.
   - `./movies_bq_src.csv` — defines the file to load. The `bq load` command can also load files from Cloud Storage with gs://my_bucket/path/to/file URIs.
   - The last argument is the schema, which can be defined in a JSON schema file or as a comma-separated list. (I’ve used a comma-separated list.)

   Hurray! Our CSV data is now loaded in the table `movies.movies_score`. Remember, you can create a view to keep only the essential columns that contribute to the model training and ignore the rest.

4. Let’s query it, quick! We can interact with BigQuery in three ways:

   1. BigQuery web UI
   2. The bq command
   3. API

Your queries can also join your data against any dataset (or datasets, so long as they're in the same location) that you have permission to read.
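If you prefer to stay in Cloud Shell, you can run a quick check with the bq tool (option 2 above); the query below assumes the `movies.movies_score` table created earlier:

```
bq query --use_legacy_sql=false \
'SELECT name, genre, score FROM movies.movies_score LIMIT 5'
```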
Find a snippet of the sample data below: ```sql SELECT name, rating, genre, runtime FROM movies.movies_score limit 3; ``` I have used the BigQuery Web SQL Workspace to run queries. The SQL Workspace looks like this: ## Predicting movie success score (user score on a scale of 1-10) In this experiment, I am predicting the success score (user score/rating) for the movie as a multi-class classification model on the movie dataset. **A quick note about the choice of model** This is an experimental choice of model chosen here, based on the evaluation of results I ran across a few models initially and finally went ahead with LOGISTIC REG to keep it simple and to get results closer to the actual movie rating from several databases. Please note that this should be considered just as a sample for implementing the model and is definitely not the recommended model for this use case. One other way of implementing this is to predict the outcome of the movie as GOOD/BAD using the Logistic Regression model instead of predicting the score. ## Using BigQuery data in Vertex AI AutoML integration Use your data from BigQuery to directly create an AutoML model with Vertex AI. Remember, we can also perform AutoML from BigQuery itself and register the model with VertexAI and expose the endpoint. Refer to the documentation for BigQuery AutoML. In this example, however, we will use Vertex AI AutoML to create our model. ### Creating a Vertex AI data set Go to Vertex AI from Google Cloud Console, enable Vertex AI API if not already done, expand data and select Datasets, click on Create data set, select TABULAR data type and the “Regression / classification” option, and click Create: ### Select data source On the next page, select a data source: Choose the “Select a table or view from BigQuery” option and select the table from BigQuery in the BigQuery path BROWSE field. Click Continue. **A Note to remember** The BigQuery instance and Vertex AI data sets should have the same region in order for the BigQuery table to show up in Vertex AI. When you are selecting your source table/view, from the browse list, remember to click on the radio button to continue with the below steps. If you accidentally click on the name of the table/view, you will be taken to Dataplex. You just need to browse back to Vertex AI if this happens to you. ### Train your model Once the dataset is created, you should see the Analyze page with the option to train a new model. Click that: ### Configure training steps Go through the steps in the Training Process. Leave Objective as **Classification**. Select AutoML option in first page and click continue: Give your model a name. Select Target Column name as “Score” from the dropdown that shows and click Continue. Also note that you can check the “Export test dataset to BigQuery” option, which makes it easy to see the test set with results in the database efficiently without an extra integration layer or having to move data between services. On the next pages, you have the option to select any advanced training options you need and the hours you want to set the model to train. Please note that you might want to be mindful of the pricing before you increase the number of node hours you want to use for training. Click **Start Training** to begin training your new model. ### Evaluate, deploy, and test your model Once the training is completed, you should be able to click Training (under the Model Development heading in the left-side menu) and see your training listed in the Training Pipelines section. 
Click that to land on the Model Registry page. You should be able to:

1. View and evaluate the training results.
1. Deploy and test the model with your API endpoint. Once you deploy your model, an API endpoint gets created which can be used in your application to send requests and get model prediction results in the response.
1. Batch predict movie scores. You can integrate batch predictions with BigQuery database objects as well. Read from the BigQuery object (in this case, I have created a view to batch predict movie scores) and write into a new BigQuery table. Provide the respective BigQuery paths as shown in the image and click CREATE:

Once it is complete, you should be able to query your database for the batch prediction results. But before you move on from this section, make sure you take note of the deployed model’s Endpoint id, location, and other details in your Vertex AI endpoint section.

We have created a custom ML model for the same use case using BigQuery ML with no code but only SQL, and it’s already detailed in another blog.

## Serverless web application with MongoDB Atlas and Angular

The user interface for this experiment uses Angular and MongoDB Atlas and is deployed on Cloud Run. Check out the blog post describing how to set up a MongoDB serverless instance to use in a web app and deploy that on Cloud Run.

In the application, we’re also utilizing Atlas Search, a full-text search capability integrated into MongoDB Atlas. Atlas Search enables autocomplete when entering information about our movies. For the data, we imported the same dataset we used earlier into Atlas. You can find the source code of the application in the dedicated GitHub repository.

## MongoDB Atlas for transactional data

In this experiment, MongoDB Atlas is used to record transactions in the form of:

1. Real-time user requests.
1. Prediction result responses.
1. Historical data to facilitate UI field autocompletion.

If, instead, you want to configure a pipeline for streaming data from MongoDB to BigQuery and vice versa, check out the dedicated Dataflow templates.

Once you provision your cluster and set up your database, make sure to note the following in preparation for our next step, creating the trigger:

1. Connection String
1. Database Name
1. Collection Name

Please note that this client application uses the Cloud Function endpoint (explained in the section below), which takes user input, predicts the movie score, and inserts the result into MongoDB.

## Java Cloud Function to trigger ML invocation from the UI

Cloud Functions is a lightweight, serverless compute solution for developers to create single-purpose, stand-alone functions that respond to Cloud events without needing to manage a server or runtime environment. In this section, we will prepare the Java Cloud Functions code and dependencies and authorize it to be executed on triggers.

Remember how we have the endpoint and other details from the ML deployment step? We are going to use that here, and since we are using Java Cloud Functions, we will use pom.xml for handling dependencies. We use the google-cloud-aiplatform library to consume the Vertex AI AutoML endpoint API:

```xml
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-aiplatform</artifactId>
  <version>3.1.0</version>
</dependency>
```

1. Search for Cloud Functions in Google Cloud console and click “Create Function.”
2. Enter the configuration details, like Environment, Function name, Region, Trigger (in this case, HTTPS), Authentication of your choice, enable “Require HTTPS,” and click next/save.
3.
On the next page, select Runtime (Java 11), Source Code (Inline or upload), and start editing 4. You can clone the Java source code and pom.xml from the GitHub repository. > If you are using Gen2 (recommended), you can use the class name and package as-is. If you use Gen1 Cloud Functions, please change the package name and class name to “Example.” 5. In the .java file, you will notice the part where we connect to MongoDB instance to write data: (use your credentials) ```java MongoClient client = MongoClients.create(YOUR_CONNECTION_STRING); MongoDatabase database = client.getDatabase("movies"); MongoCollection collection = database.getCollection("movies"); ``` 6. You should also notice the ML model invocation part in the java code (use your endpoint): ```java PredictionServiceSettings predictionServiceSettings = PredictionServiceSettings.newBuilder().setEndpoint("<>-aiplatform.googleapis.com:443") .build(); int cls = 0; … EndpointName endpointName = EndpointName.of(project, location, endpointId); ``` 7. Go ahead and deploy the function once all changes are completed. You should see the endpoint URL that will be used in the client application to send requests to this Cloud Function. That’s it! Nothing else to do in this section. The endpoint is used in the client application for the user interface to send user parameters to Cloud Functions as a request and receive movie score as a response. The endpoint also writes the response and request to the MongoDB collection. ## What’s next? Thank you for following us on this journey! As a reward for your patience, you can check out the predicted score for your favorite movie. 1. Analyze and compare the accuracy and other evaluation parameters between the BigQuery ML manually using SQLs and Vertex AI Auto ML model. 1. Play around with the independent variables and try to increase the accuracy of the prediction result. 1. Take it one step further and try the same problem as a Linear Regression model by predicting the score as a float/decimal point value instead of rounded integers. To learn more about some of the key concepts in this post you can dive in here: Linear Regression Tutorial AutoML Model Types Codelabs
md
{ "tags": [ "Atlas", "Google Cloud", "AI" ], "pageDescription": "We're using BigQuery, Vertex AI, and MongoDB Atlas to predict a categorical variable using a Supervised Machine Learning Model created with AutoML.", "contentType": "Tutorial" }
Movie Score Prediction with BigQuery, Vertex AI, and MongoDB Atlas
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-massive-arrays
created
# Massive Arrays Design patterns are a fundamental part of software engineering. They provide developers with best practices and a common language as they architect applications. At MongoDB, we have schema design patterns to help developers be successful as they plan and iterate on their schema designs. Daniel Coupal and Ken Alger co-wrote a fantastic blog series that highlights each of the schema design patterns. If you really want to dive into the details (and I recommend you do!), check out MongoDB University's free course on Data Modeling. Sometimes, developers jump right into designing their schemas and building their apps without thinking about best practices. As their apps begin to scale, they realize that things are bad. We've identified several common mistakes developers make with MongoDB. We call these mistakes "schema design anti-patterns." Throughout this blog series, I'll introduce you to six common anti-patterns. Let's start today with the Massive Arrays anti-pattern. > > >:youtube]{vid=8CZs-0it9r4 start=236} > >Prefer to learn by video? I've got you covered. > > ## Massive Arrays One of the rules of thumb when modeling data in MongoDB is *data that is accessed together should be stored together*. If you'll be retrieving or updating data together frequently, you should probably store it together. Data is commonly stored together by embedding related information in subdocuments or arrays. The problem is that sometimes developers take this too far and embed massive amounts of information in a single document. Consider an example where we store information about employees who work in various government buildings. If we were to embed the employees in the building document, we might store our data in a buildings collection like the following: ``` javascript // buildings collection { "_id": "city_hall", "name": "City Hall", "city": "Pawnee", "state": "IN", "employees": [ { "_id": 123456789, "first": "Leslie", "last": "Yepp", "cell": "8125552344", "start-year": "2004" }, { "_id": 234567890, "first": "Ron", "last": "Swandaughter", "cell": "8125559347", "start-year": "2002" } ] } ``` In this example, the employees array is unbounded. As we begin storing information about all of the employees who work in City Hall, the employees array will become massive—potentially sending us over the [16 mb document maximum. Additionally, reading and building indexes on arrays gradually becomes less performant as array size increases. The example above is an example of the massive arrays anti-pattern. So how can we fix this? Instead of embedding the employees in the buildings documents, we could flip the model and instead embed the buildings in the employees documents: ``` javascript // employees collection { "_id": 123456789, "first": "Leslie", "last": "Yepp", "cell": "8125552344", "start-year": "2004", "building": { "_id": "city_hall", "name": "City Hall", "city": "Pawnee", "state": "IN" } }, { "_id": 234567890, "first": "Ron", "last": "Swandaughter", "cell": "8125559347", "start-year": "2002", "building": { "_id": "city_hall", "name": "City Hall", "city": "Pawnee", "state": "IN" } } ``` In the example above, we are repeating the information about City Hall in the document for each City Hall employee. If we are frequently displaying information about an employee and their building in our application together, this model probably makes sense. The disadvantage with this approach is we have a lot of data duplication. 
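As a rough mongosh sketch (using the example's collection and field names; the new building name is purely illustrative), a single change to City Hall's details has to be fanned out across every employee document that embeds it:

``` javascript
// Rename the building everywhere it is embedded in an employee document
db.employees.updateMany(
  { "building._id": "city_hall" },
  { $set: { "building.name": "Pawnee City Hall" } }
)
```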
Storage is cheap, so data duplication isn't necessarily a problem from a storage cost perspective. However, every time we need to update information about City Hall, we'll need to update the document for every employee who works there. If we take a look at the information we're currently storing about the buildings, updates will likely be very infrequent, so this approach may be a good one. If our use case does not call for information about employees and their building to be displayed or updated together, we may want to instead separate the information into two collections and use references to link them: ``` javascript // buildings collection { "_id": "city_hall", "name": "City Hall", "city": "Pawnee", "state": "IN" } // employees collection { "_id": 123456789, "first": "Leslie", "last": "Yepp", "cell": "8125552344", "start-year": "2004", "building_id": "city_hall" }, { "_id": 234567890, "first": "Ron", "last": "Swandaughter", "cell": "8125559347", "start-year": "2002", "building_id": "city_hall" } ``` Here we have completely separated our data. We have eliminated massive arrays, and we have no data duplication. The drawback is that if we need to retrieve information about an employee and their building together, we'll need to use $lookup to join the data together. $lookup operations can be expensive, so it's important to consider how often you'll need to perform $lookup if you choose this option. If we find ourselves frequently using $lookup, another option is to use the extended reference pattern. The extended reference pattern is a mixture of the previous two approaches where we duplicate some—but not all—of the data in the two collections. We only duplicate the data that is frequently accessed together. For example, if our application has a user profile page that displays information about the user as well as the name of the building and the state where they work, we may want to embed the building name and state fields in the employee document: ``` javascript // buildings collection { "_id": "city_hall", "name": "City Hall", "city": "Pawnee", "state": "IN" } // employees collection { "_id": 123456789, "first": "Leslie", "last": "Yepp", "cell": "8125552344", "start-year": "2004", "building": { "name": "City Hall", "state": "IN" } }, { "_id": 234567890, "first": "Ron", "last": "Swandaughter", "cell": "8125559347", "start-year": "2002", "building": { "name": "City Hall", "state": "IN" } } ``` As we saw when we duplicated data previously, we should be mindful of duplicating data that will frequently be updated. In this particular case, the name of the building and the state the building is in are very unlikely to change, so this solution works. ## Summary Storing related information that you'll be frequently querying together is generally good. However, storing information in massive arrays that will continue to grow over time is generally bad. As is true with all MongoDB schema design patterns and anti-patterns, carefully consider your use case—the data you will store and how you will query it—in order to determine what schema design is best for you. Be on the lookout for more posts in this anti-patterns series in the coming weeks. > > >When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB. With a forever-free tier, you're on your way to realizing the full value of MongoDB. 
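For reference, here is roughly what that $lookup would look like for the fully separated model above — a mongosh sketch using the collections and fields from the example, not code from the original post:

``` javascript
// Join one employee to their building using the referenced model
db.employees.aggregate([
  { $match: { _id: 123456789 } },
  {
    $lookup: {
      from: "buildings",          // collection to join
      localField: "building_id",  // reference stored on the employee
      foreignField: "_id",        // matching _id in buildings
      as: "building"              // output array field
    }
  }
])
```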
> > ## Related Links Check out the following resources for more information: - MongoDB Docs: Unbounded Arrays Anti-Pattern - MongoDB Docs: Data Modeling Introduction - MongoDB Docs: Model One-to-One Relationships with Embedded Documents - MongoDB Docs: Model One-to-Many Relationships with Embedded Documents - MongoDB Docs: Model One-to-Many Relationships with Document References - MongoDB University M320: Data Modeling - Blog Series: Building with Patterns
md
{ "tags": [ "MongoDB" ], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Massive Arrays", "contentType": "Article" }
Massive Arrays
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-unnecessary-indexes
created
# Unnecessary Indexes So far in this MongoDB Schema Design Anti-Patterns series, we've discussed avoiding massive arrays as well as a massive number of collections. Today, let's talk about indexes. Indexes are great (seriously!), but it's easy to get carried away and make indexes that you'll never actually use. Let's examine why an index may be unnecessary and what the consequences of keeping it around are. > > >:youtube]{vid=mHeP5IbozDU start=32} > >Would you rather watch than read? The video above is just for you. > > ## Unnecessary Indexes Before we go any further, we want to emphasize that [indexes are good. Indexes allow MongoDB to efficiently query data. If a query does not have an index to support it, MongoDB performs a collection scan, meaning that it scans *every* document in a collection. Collection scans can be very slow. If you frequently execute a query, make sure you have an index to support it. Now that we have an understanding that indexes are good, you might be wondering, "Why are unnecessary indexes an anti-pattern? Why not create an index on every field just in case I'll need it in the future?" We've discovered three big reasons why you should remove unnecessary indexes: 1. **Indexes take up space**. Each index is at least 8 kB and grows with the number of documents associated with it. Thousands of indexes can begin to drain resources. 2. **Indexes can impact the storage engine's performance**. As we discussed in the previous post in this series about the Massive Number of Collections Anti-Pattern, the WiredTiger storage engine (MongoDB's default storage engine) stores a file for each collection and for each index. WiredTiger will open all files upon startup, so performance will decrease when an excessive number of collections and indexes exist. 3. **Indexes can impact write performance**. Whenever a document is created, updated, or deleted, any index associated with that document must also be updated. These index updates negatively impact write performance. In general, we recommend limiting your collection to a maximum of 50 indexes. To avoid the anti-pattern of unnecessary indexes, examine your database and identify which indexes are truly necessary. Unnecessary indexes typically fall into one of two categories: 1. The index is rarely used or not at all. 2. The index is redundant because another compound index covers it. ## Example Consider Leslie from the incredible TV show Parks and Recreation. Leslie often looks to other powerful women for inspiration. Let's say Leslie wants to inspire others, so she creates a website about her favorite inspirational women. The website allows users to search by full name, last name, or hobby. Leslie chooses to use MongoDB Atlas to create her database. She creates a collection named `InspirationalWomen`. Inside of that collection, she creates a document for each inspirational woman. Below is a document she created for Sally Ride. ``` javascript // InspirationalWomen collection { "_id": { "$oid": "5ec81cc5b3443e0e72314946" }, "first_name": "Sally", "last_name": "Ride", "birthday": 1951-05-26T00:00:00.000Z, "occupation": "Astronaut", "quote": "I would like to be remembered as someone who was not afraid to do what she wanted to do, and as someone who took risks along the way in order to achieve her goals.", "hobbies": "Tennis", "Writing children's books" ] } ``` Leslie eats several sugar-filled Nutriyum bars, and, riding her sugar high, creates an index for every field in her collection. 
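In mongosh, those single-field index builds would look something like this (a sketch, not from the original post — a couple are shown, and the full list of resulting indexes follows below):

``` javascript
db.InspirationalWomen.createIndex({ first_name: 1 })
db.InspirationalWomen.createIndex({ last_name: 1 })
db.InspirationalWomen.createIndex({ hobbies: 1 })
// ...and so on for birthday, occupation, and quote
```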
She also creates a compound index on the last_name and first_name fields, so that users can search by full name. Leslie now has one collection with eight indexes: 1. `_id` is indexed by default (see the [MongoDB Docs for more details) 2. `{ first_name: 1 }` 3. `{ last_name: 1 }` 4. `{ birthday: 1 }` 5. `{ occupation: 1 }` 6. `{ quote: 1 }` 7. `{ hobbies: 1 }` 8. `{ last_name: 1, first_name: 1}` Leslie launches her website and is excited to be helping others find inspiration. Users are discovering new role models as they search by full name, last name, and hobby. ### Removing Unnecessary Indexes Leslie decides to fine-tune her database and wonders if all of those indexes she created are really necessary. She opens the Atlas Data Explorer and navigates to the Indexes pane. She can see that the only two indexes that are being used are the compound index named `last_name_1_first_name_1` and the `hobbies_1` index. She realizes that this makes sense. Her queries for inspirational women by full name are covered by the `last_name_1_first_name_1` index. Additionally, her query for inspirational women by last name is covered by the same `last_name_1_first_name_1` compound index since the index has a `last_name` prefix. Her queries for inspirational women by hobby are covered by the `hobbies_1` index. Since those are the only ways that users can query her data, the other indexes are unnecessary. In the Data Explorer, Leslie has the option of dropping all of the other unnecessary indexes. Since MongoDB requires an index on the `_id` field, she cannot drop this index. In addition to using the Data Explorer, Leslie also has the option of using MongoDB Compass to check for unnecessary indexes. When she navigates to the Indexes pane for her collection, she can once again see that the `last_name_1_first_name_1` and the `hobbies_1` indexes are the only indexes being used regularly. Just as she could in the Atlas Data Explorer, Leslie has the option of dropping each of the indexes except for `_id`. Leslie decides to drop all of the unnecessary indexes. After doing so, her collection now has the following indexes: 1. `_id` is indexed by default 2. `{ hobbies: 1 }` 3. `{ last_name: 1, first_name: 1}` ## Summary Creating indexes that support your queries is good. Creating unnecessary indexes is generally bad. Unnecessary indexes reduce performance and take up space. An index is considered to be unnecessary if (1) it is not frequently used by a query or (2) it is redundant because another compound index covers it. You can use the Atlas Data Explorer or MongoDB Compass to help you discover how frequently your indexes are being used. When you discover an index is unnecessary, remove it. Be on the lookout for the next post in this anti-patterns series! ## Related Links Check out the following resources for more information: - MongoDB Docs: Remove Unnecessary Indexes - MongoDB Docs: Indexes - MongoDB Docs: Compound Indexes — Prefixes - MongoDB Docs: Indexing Strategies - MongoDB Docs: Data Modeling Introduction - MongoDB University M320: Data Modeling - MongoDB University M201: MongoDB Performance - Blog Series: Building with Patterns
md
{ "tags": [ "MongoDB" ], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Unnecessary Indexes", "contentType": "Article" }
Unnecessary Indexes
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/restapi-mongodb-code-example
created
# Final Space API

## Creator

Ashutosh Kumar Singh contributed this project.

## About the Project

The Final Space API is based on the television show Final Space by Olan Rogers from TBS. From talking cats to evil aliens, the animated show tells the intergalactic adventures of Gary Goodspeed and his alien friend Mooncake as they unravel the mystery of "Final Space". The show can be viewed, amongst other places, on TBS, AdultSwim, and Netflix.

All data of this API, such as character info, is obtained from the Final Space wiki. More data, such as season and episode information, is planned for future release. This data can be used for your own projects, such as fan pages, or in any way you see fit.

All this information is available through a RESTful API implemented in NodeJS. This API returns data in a friendly JSON format.

The Final Space API is maintained as an open source project on GitHub. More information about contributing can be found in the readme.

## Inspiration

During Hacktoberfest 2020, I wanted to create and maintain a project of my own, not just contribute during Hacktoberfest. Final Space is one of my favorite animated television shows. I took inspiration from the Rick & Morty API and set out to build the MVP of the API. The project saw huge contributions from developers all around the world, and version 1 of the API was finished by the end of October.

## Why MongoDB?

I wanted the data to be quick to access and easy to maintain. MongoDB was the obvious choice, and the free cluster is more than enough for all my needs. I believe I could grow the data a hundredfold and the free cluster would still meet my needs.

## How It Works

You can fetch the data by making a GET request to any of the endpoints. There are four available resources:

- Character: used to get all the characters. https://finalspaceapi.com/api/v0/character
- Episode: used to get all the episodes. https://finalspaceapi.com/api/v0/episode
- Location: used to get all the locations. https://finalspaceapi.com/api/v0/location
- Quote: used to get quotes from Final Space. https://finalspaceapi.com/api/v0/quote
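For example, pulling the character list from JavaScript could look like the snippet below (the endpoint URL comes from the list above; the response is simply logged as whatever JSON the API returns):

``` javascript
fetch("https://finalspaceapi.com/api/v0/character")
  .then((res) => res.json())
  .then((characters) => console.log(characters))
  .catch(console.error);
```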
md
{ "tags": [ "JavaScript", "Atlas" ], "pageDescription": "Final Space API is a public RESTful API based on the animated television show Final Space.", "contentType": "Code Example" }
Final Space API
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-flexible-sync
created
# Introducing Flexible Sync (Preview) – The Next Iteration of Realm Sync Today, we are excited to announce the public preview of our next version of Realm Sync: Flexible Sync. This new method of syncing puts the power into the hands of the developer. Now, developers can get more granular control over the data synced to user applications with intuitive language-native queries and hierarchical permissions. :youtube]{vid=aJ6TI1mc7Bs} ## Introduction Prior to launching the general availability of Realm Sync in February 2021, the Realm team spent countless hours with developers learning how they build best-in-class mobile applications. A common theme emerged—building real-time, offline-first mobile apps require an overwhelming amount of complex, non-differentiating work. Our [first version of Realm Sync addressed this pain by abstracting away offline-first, real-time syncing functionality using declarative APIs. It expedited the time-to-market for many developers and worked well for apps where data is static and compartmentalized, or where permissions rarely need to change. But for dynamic apps and complex use cases, developers still had to spend time creating workarounds instead of developing new features. With that in mind, we built the next iteration of Realm Sync: Flexible Sync. Flexible Sync is designed to help developers: - Get to market faster: Use intuitive, language-native queries to define the data synced to user applications instead of proprietary concepts. - Optimize real-time collaboration between users: Utilize object-level conflict-resolution logic. - Simplify permissions: Apply role-based logic to applications with an expressive permissions system that groups users into roles on a pe-class or collection basis. Flexible Sync requires MongoDB 5.0+. > **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free! ## Language-Native Querying Flexible Sync’s query-based sync logic is distinctly different from how Realm Sync operates today. The new structure is designed to more closely mirror how developers are used to building sync today—typically using GET requests with query parameters. One of the primary benefits of Flexible Sync is that it eliminates all the time developers spend determining what query parameters to pass to an endpoint. Instead, the Realm APIs directly integrate with the native querying system on the developer’s choice of platform—for example, a predicate-based query language for iOS, a Fluent query for Android, a string-based query for Javascript, and a LINQ query for .NET. Under the hood, the Realm Sync thread sends the query to MongoDB Realm (Realm’s cloud offering). MongoDB Realm translates the query to MongoDB’s query language and executes the query against MongoDB Atlas. Atlas then returns the resulting documents. Those documents are then translated into Realm objects, sent down to the Realm client, and stored on disk. The Realm Sync thread keeps a queue of any changes made locally to synced objects—even when offline. As soon as connectivity is reestablished, any changes made to the server-side or client-side are synced down using built-in granular conflict resolution logic. All of this occurs behind the scenes while the developer is interacting with the data. 
This is the part we’ve heard our users describe as “magic.” Flexible Sync also enables much more dynamic queries, based on user inputs. Picture a home listing app that allows users to search available properties in a certain area. As users define inputs—only show houses in Dallas, TX that cost less than $300k and have at least three bedrooms—the query parameters can be combined with logical ANDs and ORs to produce increasingly complex queries, and narrow down the search result even further. All query results are combined into a single realm file on the client’s device, which significantly simplifies code required on the client-side and ensures changes to data are synced efficiently and in real time. ::::tabs :::tab]{tabid="Swift"} ```swift // Set your Schema class Listing: Object { @Persisted(primaryKey: true) var _id: ObjectId @Persisted var location: String @Persisted var price: Int @Persisted var bedrooms: Int } // Configure your App and login let app = App(id: "XXXX") let user = try! await app.login(credentials: .emailPassword(email: "email", password: "password")) // Set the new Flexible Sync Config and open the Realm let config = user.flexibleSyncConfiguration() let realm = try! await Realm(configuration: config, downloadBeforeOpen: .always) // Create a Query and Add it to your Subscriptions let subscriptions = realm.subscriptions try! await subscriptions.write { subscriptions.append(QuerySubscription(name: "home-search") { $0.location == "dallas" && $0.price < 300000 && $0.bedrooms >= 3 }) } // Now query the local realm and get your home listings - output is 100 listings // in the results print(realm.objects(Listing.self).count) // Remove the subscription - the data is removed from the local device but stays // on the server try! await subscriptions.write { subscriptions.remove(named: "home-search") } // Output is 0 - listings have been removed locally print(realm.objects(Listing.self).count) ``` ::: :::tab[]{tabid="Kotlin"} ```kotlin // Set your Schema open class Listing: ObjectRealm() { @PrimaryKey @RealmField("_id") var id: ObjectId var location: String = "" var price: Int = 0 var bedrooms: Int = 0 } // Configure your App and login val app = App("") val user = app.login(Credentials.emailPassword("email", "password")) // Set the new Flexible Sync Config and open the Realm let config = SyncConfiguration.defaultConfig(user) let realm = Realm.getInstance(config) // Create a Query and Add it to your Subscriptions val subscriptions = realm.subscriptions subscriptions.update { mutableSubscriptions -> val sub = Subscription.create( "home-search", realm.where() .equalTo("location", "dallas") .lessThan("price", 300_000) .greaterThanOrEqual("bedrooms", 3) ) mutableSubscriptions.add(subscription) } // Wait for server to accept the new subscription and download data subscriptions.waitForSynchronization() realm.refresh() // Now query the local realm and get your home listings - output is 100 listings // in the results val homes = realm.where().count() // Remove the subscription - the data is removed from the local device but stays // on the server subscriptions.update { mutableSubscriptions -> mutableSubscriptions.remove("home-search") } subscriptions.waitForSynchronization() realm.refresh() // Output is 0 - listings have been removed locally val homes = realm.where().count() ``` ::: :::tab[]{tabid=".NET"} ```csharp // Set your Schema class Listing: RealmObject { [PrimaryKey, MapTo("_id")] public ObjectId Id { get; set; } public string Location { get; set; } public int Price { get; set; } 
public int Bedrooms { get; set; } } // Configure your App and login var app = App.Create(YOUR_APP_ID_HERE); var user = await app.LogInAsync(Credentials.EmailPassword("email", "password")); // Set the new Flexible Sync Config and open the Realm var config = new FlexibleSyncConfiguration(user); var realm = await Realm.GetInstanceAsync(config); // Create a Query and Add it to your Subscriptions var dallasQuery = realm.All().Where(l => l.Location == "dallas" && l.Price < 300_000 && l.Bedrooms >= 3); realm.Subscriptions.Update(() => { realm.Subscriptions.Add(dallasQuery); }); await realm.Subscriptions.WaitForSynchronizationAsync(); // Now query the local realm and get your home listings - output is 100 listings // in the results var numberOfListings = realm.All().Count(); // Remove the subscription - the data is removed from the local device but stays // on the server realm.Subscriptions.Update(() => { realm.Subscriptions.Remove(dallasQuery); }); await realm.Subscriptions.WaitForSynchronizationAsync(); // Output is 0 - listings have been removed locally numberOfListings = realm.All().Count(); ``` ::: :::tab[]{tabid="JavaScript"} ```js import Realm from "realm"; // Set your Schema const ListingSchema = { name: "Listing", primaryKey: "_id", properties: { _id: "objectId", location: "string", price: "int", bedrooms: "int", }, }; // Configure your App and login const app = new Realm.App({ id: YOUR_APP_ID_HERE }); const credentials = Realm.Credentials.emailPassword("email", "password"); const user = await app.logIn(credentials); // Set the new Flexible Sync Config and open the Realm const realm = await Realm.open({ schema: [ListingSchema], sync: { user, flexible: true }, }); // Create a Query and Add it to your Subscriptions await realm.subscriptions.update((mutableSubscriptions) => { mutableSubscriptions.add( realm .objects(ListingSchema.name) .filtered("location = 'dallas' && price < 300000 && bedrooms = 3", { name: "home-search", }) ); }); // Now query the local realm and get your home listings - output is 100 listings // in the results let homes = realm.objects(ListingSchema.name).length; // Remove the subscription - the data is removed from the local device but stays // on the server await realm.subscriptions.update((mutableSubscriptions) => { mutableSubscriptions.removeByName("home-search"); }); // Output is 0 - listings have been removed locally homes = realm.objects(ListingSchema.name).length; ``` ::: :::: ## Optimizing for Real-Time Collaboration Flexible Sync also enhances query performance and optimizes for real-time user collaboration by treating a single object or document as the smallest entity for synchronization. Flexible Sync allows for Sync Realms to more efficiently share data and for conflict resolution to incorporate changes faster and with less data transfer. For example, you and a fellow employee are analyzing the remaining tasks for a week. Your coworker wants to see all of the time-intensive tasks remaining (`workunits > 5`), and you want to see all the tasks you have left for the week (`owner == ianward`). Your queries will overlap where `workunits > 5` and `owner == ianward`. If your coworker notices one of your tasks is marked incorrectly as `7 workunits` and changes the value to `6`, you will see the change reflected on your device in real time. Under the hood, the merge algorithm will only sync the changed document instead of the entire set of query results increasing query performance. 
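To make the task-tracking scenario above concrete, here is a minimal sketch of how the two coworkers' overlapping subscriptions could be expressed, following the JavaScript tab shown earlier. The `Task` schema, the field names `owner` and `workunits`, and the subscription names are assumptions for illustration and are not part of the original example.

```js
import Realm from "realm";

// Assumed schema for the tasks being analyzed (illustrative only).
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    owner: "string",
    workunits: "int",
  },
};

// `realm` is assumed to be a flexible-sync Realm opened with
// { schema: [TaskSchema], sync: { user, flexible: true } }, as in the earlier example.
async function subscribeToTasks(realm) {
  await realm.subscriptions.update((mutableSubscriptions) => {
    // Your coworker's view: every time-intensive task.
    mutableSubscriptions.add(realm.objects("Task").filtered("workunits > 5"), {
      name: "time-intensive-tasks",
    });
    // Your view: every task you still own.
    mutableSubscriptions.add(realm.objects("Task").filtered("owner == $0", "ianward"), {
      name: "my-tasks",
    });
  });
}
```

A task that matches both predicates, such as one of your tasks marked as 7 workunits, sits in the overlap of the two subscriptions, so an edit made by either of you is synced to both devices while only the changed document travels over the network.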
*(Figure: Venn diagram showing that two different queries can share some of the same documents.)*

## Permissions

Whether it’s a company’s internal application or an app on the App Store, permissions are required in almost every application. That’s why we are excited by how seamless Flexible Sync makes applying a document-level permission model when syncing data—meaning synced documents can be limited based on a user’s role.

Consider how a sales organization uses a CRM application. An individual sales representative should only be able to access her own sales pipeline, while her manager needs to be able to see the entire region’s sales pipeline. In Flexible Sync, a user’s role will be combined with the client-side query to determine the appropriate result set. For example, when the sales representative above wants to view her deals, she would send a query where `opportunities.owner == "EmmaLullo"`, but when her manager wants to see all the opportunities for their entire team, they would query with `opportunities.team == "West"`. If a user sends a much more expansive query, such as querying for all opportunities, then the permissions system would only allow data to be synced for which the user had explicit access.

```json
{
  "Opportunities": {
    "roles": [
      {
        "name": "manager",
        "applyWhen": { "%%user.custom_data.isSalesManager": true },
        "read": { "team": "%%user.custom_data.teamManager" },
        "write": { "team": "%%user.custom_data.teamManager" }
      },
      {
        "name": "salesperson",
        "applyWhen": {},
        "read": { "owner": "%%user.id" },
        "write": { "owner": "%%user.id" }
      }
    ]
  },
  "Bookings": {
    "roles": [
      {
        "name": "accounting",
        "applyWhen": { "%%user.custom_data.isAccounting": true },
        "read": true,
        "write": true
      },
      {
        "name": "sales",
        "applyWhen": {},
        "read": { "%%user.custom_data.isSales": true },
        "write": false
      }
    ]
  }
}
```

## Looking Ahead

Ultimately, our goal with Flexible Sync is to deliver a sync service that can fit any use case or schema design pattern imaginable without custom code or workarounds. And while we are excited that Flexible Sync is now in preview, we’re nowhere near done. The Realm Sync team is planning to bring you more query operators and permissions integrations over the course of 2022. Up next, we are looking to expose array operators and enable querying on embedded documents, but really, we look to you, our users, to help us drive the roadmap. Submit your ideas and feature requests to our feedback portal and ask questions in our Community forum.

Happy building!
md
{ "tags": [ "Realm" ], "pageDescription": "Realm Flexible Sync (now in preview) gives developers new options for syncing data to your apps", "contentType": "News & Announcements" }
Introducing Flexible Sync (Preview) – The Next Iteration of Realm Sync
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/kotlin/realm-google-authentication-android
created
# Start Implementing Google Auth With MongoDB Realm in Your Android App Hello, everyone. I am Henna. I started with Mobile Application back in 2017 when I was a lucky recipient of the Udacity Scholarship. I had always used SQLite when it came to using databases in my mobile apps. Using SQLite was definitely a lot of boilerplate code, but using it with Room library did make it easier. I had heard about Realm before but I got so comfortable using Room with SQLite that I never thought of exploring the option. At the time, I was not aware that Realm had multiple offerings, from being used as a local database on mobile, to offering Sync features to be able to sync your app data to multiple devices. I will pen down my experiments with MongoDB Realm as a series of articles. This is the first article in the series and it is divided into two parts. **Part A** will explain how to create a MongoDB Realm back end for your mobile app. **Part B** will explain how to implement Google Authentication in the app. >Pre-Requisites: You have created at least one app using Android Studio. Photo by Emily Finch on Unsplash Let's get some coffee and get the ball rolling. :) ## Part A: ### Step 1. How to Create an Account on MongoDB Cloud MongoDB Realm is a back end as a service. When you want to use MongoDB Realm Sync functionality, you need to create a MongoDB Realm account and it is free :D :D > MongoDB’s Atlas offering of the database as a service is what makes this database so amazing. For mobile applications, we use Realm DB locally on the mobile device, and the local data gets synced to MongoDB Atlas on the cloud. An account on MongoDB Cloud can be easily created by visiting . Once you sign-in to your account, you will be asked to create an Organization Once you click on the Create button, you will be asked to enter organization name and select MongoDB Atlas as a Cloud Service as shown below and click Next. Add members and permissions as desired and click on Create Organization. Since I am working on my own I added only myself as Project Owner. Next you will be asked to create a project, name it and add members and permissions. Each permission is described on the right side. Be cautious of whom you give read/write access of your database. Once you create a project, you will be asked to deploy your database as shown below Depending on your use-case, you can select from given options. For this article, I will choose shared and Free service :) Next select advance configuration options and you will be asked to select a Cloud Provider and Region A cluster is a group of MongoDB Servers that store your data on the cloud. Depending on your app requirement, you choose one. I opted for a free cluster option for this app. > Be mindful of the Cloud Provider and the Location you choose. Realm App is currently available only with AWS and it is recommended to have Realm App region closer to the cluster region and same Cloud Provider. So I choose the settings as shown. > Give a name to your cluster. Please note this cannot be changed later. And with this, you are all set with Step 1. ### Step 2. Security Quickstart Once you have created your cluster, you will be asked to create a user to access data stored in Atlas. This used to be a manual step earlier but now you get the option to add details as and when you create a cluster. These credentials can be used to connect to your cluster via MongoDB Compass or Mongo Shell but we will come to that later. 
You can click on “Add My Current IP Address” to whitelist your IP address and follow as instructed. If you need to change your settings at a later time, use Datasbase Access and Network Access from Security section that will appear on left panel. With this Step 2 is done. ### Step 3. How to Create a Realm App on the Cloud We have set up our cluster, so the next step is to create a Realm app and link it to it. Click on the Realm tab as shown. You will be shown a template window that you can choose from. For this article, I will select “Build your own App” Next you will be asked to fill details as shown. Your Data Source is the Atlas Cluster you created in Step1. If you have multiple clusters, select the one you want to link your app with. Please note, Realm app names should have fewer than 64 characters. For better performance, It is recommended to have local deployment and region the same or closer to your cluster region. Check the Global Deployment section in MongoDB's official documentation for more details. You will be shown a section of guides once you click on “Create a Realm Application”. You can choose to follow the guides if you know what you are doing, but for brevity of this article, I will close guides and this will bring you to your Realm Dashboard as shown Please keep a note of the “App Id”. This will be needed when you create the Android Studio project. There are plethora of cloud services that comes with MongoDB Realm. You can use functions, triggers, and other features depending on your app use cases. For this article, you will be using Authentication. With this, you are finished with Part A. Yayyy!! :D ## Part B: ### Step 1. Creating an Android Studio Project I presume you all have experience creating mobile applications using Android Studio. In this step, you would "Start a new Android Project." You can enter any name of your choice and select Kotlin as the language and min API 21. Once you create the project, you need to add dependencies for the Realm Database and Google Authentication. **For Realm**, add this line of code in the project-level `build.gradle` file. This is the latest version at the time of writing this article. **Edit 01:** Plugin is updated to current (may change again in future). ``` java classpath "io.realm:realm-gradle-plugin:10.9.0" ``` After adding this, the dependencies block would look like this. ``` java dependencies { classpath "com.android.tools.build:gradle:4.0.0" classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" classpath "io.realm:realm-gradle-plugin:10.9.0" // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } ``` Now we add Realm plugin and Google Authentication in the app-level `build.gradle` file. Add this code at the top of the file but below the `kotlin-kapt` extension. If you are using Java, then this would come after the Android plugin. ``` java apply plugin: 'kotlin-kapt' apply plugin: 'realm-android' ``` In the same file, we would also add the below code to enable the Realm sync in the application. You can add it anywhere inside the Android block. ``` java android { ... ... realm { syncEnabled = true } ... } ``` For Google Auth, add the following dependency in the app-level gradle file. Please note, the versions may change since the time of this article. **Edit 02:** gms version is updated to current. ``` java dependencies{ ... ... //Google OAuth implementation 'com.google.android.gms:play-services-auth:20.0.1' ... 
} ``` With this, we are finished with Step 1. Let's move onto the next step to implement Google Authentication in the project. ### Step 2. Adding Google Authentication to the Application Now, I will not get into too much detail on implementing Google Authentication to the app since that will deviate from our main topic. I have listed below the set of steps I took and links I followed to implement Google Authentication in my app. 1. Configure a Google API Console project. (Create credentials for Android Application and Web Application). Your credential screen should have 2 oAuth Client IDs. 2. Configure Google Sign-in and the GoogleSignInClient object (in the Activity's onCreate method). 3. Add the Google Sign-in button to the layout file. 4. Implement Sign-in flow. This is what the activity will look like at the end of the four steps. >**Please note**: This is only a guideline. Your variable names and views can be different. The String server_client_id here is the web client-id you created in Google Console when you created Google Auth credentials in the Google Console Project. ``` java class MainActivity : AppCompatActivity() { private lateinit var client: GoogleSignInClient override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val googleSignInOptions = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .requestServerAuthCode(getString(R.string.server_client_id)) .build() client = GoogleSignIn.getClient(this, googleSignInOptions) findViewById(R.id.sign_in_button).setOnClickListener{ signIn() } } override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) { super.onActivityResult(requestCode, resultCode, data) if(requestCode == 100){ val task = GoogleSignIn.getSignedInAccountFromIntent(data) val account = task.getResult(ApiException::class.java) handleSignInResult(account) } } private fun handleSignInResult(account: GoogleSignInAccount?) { try{ Log.d("MainActivity", "${account?.serverAuthCode}") //1 val idToken = account?.serverAuthCode //signed in successfully, forward credentials to MongoDB realm //2 val googleCredentials = Credentials.google(idToken) //3 app.loginAsync(googleCredentials){ if(it.isSuccess){ Log.d("MainActivity", "Successfully authenticated using Google OAuth") //4 startActivity(Intent(this, SampleResult::class.java)) } else { Log.d("MainActivity", "Failed to Log in to MongoDB Realm: ${it.error.errorMessage}") } } } catch(exception: ApiException){ Log.d("MainActivity", exception.printStackTrace().toString()) } } private fun signIn() { val signIntent = client.signInIntent startActivityForResult(signIntent, 100) } } ``` When you run your app, your app should ask you to sign in with your Google account, and when successful, it should open SampleResult Activity. I displayed a random text to show that it works. :D Now, we will move onto the next step and configure the Google Auth provider on the MongoDB Realm cloud account. ### Step 3. Configure Google Auth Provider on MongoRealm UI Return to the MongoDB Realm account where you created your Realm app. On the left panel, click on the Authentication tab and you will see the list of auth providers that MongoDB Realm supports. Click on the *edit* icon corresponding to Google Authentication provider and you will be led to a page as shown below. **Edit 03:** Updated Screenshot as there is now a new option OpenID connect. 
Toggle the **Provider Enabled** switch to **On** and enter the **Web-Client ID** and **Web Client Secret** from the Google Console Project you created above. You can choose the Metadata Fields as per your app use case and click Save. > Keeping old UI here as OpenID Connect is not used. > With this, we are finished with Step 3. ### Step 4. Implementing Google Auth Sync Credentials to the Project This is the last step of Part 2. We will use the Google Auth token received upon signing in with our Google Account in the previous step to authenticate to our MongoDB Realm account. We already added dependencies for Realm in Step 3 and we created a Realm app on the back end in Step 2. Now, we initialize Realm and use the appId (Remember I asked you to make a note of the app Id? Check Step 2. ;)) to connect back end with our mobile app. Create a new Kotlin class that extends the application class and write the following code onto it. ``` java val appId ="realmsignin-abyof" // Enter your own App Id here lateinit var app: App class RealmApp: Application() { override fun onCreate() { super.onCreate() Realm.init(this) app = App(AppConfiguration.Builder(appId).build()) } } ``` An "App" is the main client-side entry point for interacting with the MongoDB Realm app and all its features, so we configure it in the application subclass for getting global access to the variable. This is the simplest way to configure it. After configuring the "App", you can add authentication, manage users, open synchronized realms, and all other functionalities that MongoDB Realm offers. To add more details when configuring, check the MongoDB Realm Java doc. Don't forget to add the RealmApp class or whatever name you chose to the manifest file. ``` java .... .... ... ... ``` Now come back to the `handleSignInResult()` method call in the MainActivity, and add the following code to that method. ``` java private fun handleSignInResult(account: GoogleSignInAccount?) { try{ Log.d("MainActivity", "${account?.serverAuthCode}") // Here, you get the serverAuthCode after signing in with your Google account. val idToken = account?.serverAuthCode // signed in successfully, forward credentials to MongoDB realm // In this statement, you pass the token received to ``Credentials.google()`` method to pass it to MongoDB Realm. val googleCredentials = Credentials.google(idToken) // Here, you login asynchronously by passing Google credentials to the method. app.loginAsync(googleCredentials){ if(it.isSuccess){ Log.d("MainActivity", "Successfully authenticated using Google OAuth") // If successful, you navigate to another activity. This may give a red mark because you have not created SampleResult activity. Create an empty activity and name it SampleResult. startActivity(Intent(this, SampleResult::class.java)) } else { Log.d("MainActivity", "Failed to Log in to MongoDB Realm: ${it.error.errorMessage}") } } } catch(exception: ApiException){ Log.d("MainActivity", exception.printStackTrace().toString()) } } ``` Add a TextView with a Successful Login message to the SampleResult layout file. Now, when you run your app, log in with your Google account and your SampleResult Activity with Successful Login message should be shown. When you check the App Users section in your MongoDB Realm account, you should notice one user created. ## Wrapping Up You can get the code for this tutorial from this GitHub repo. Well done, everyone. 
We are finished with implementing Google Auth with MongoDB Realm, and I would love to know if you have any feedback for me.❤ You can post questions on MongoDB Community Forums or if you are struggling with any topic, please feel free to reach out. In the next article, I talk about how to implement Realm Sync in your Android Application.
md
{ "tags": [ "Kotlin", "Realm", "Google Cloud", "Android", "Mobile" ], "pageDescription": "Getting Started with MongoDB Realm and Implementing Google Authentication in Your Android App", "contentType": "Tutorial" }
Start Implementing Google Auth With MongoDB Realm in Your Android App
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-cocoa-swiftui-combine
created
# Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine After three years of work, we're proud to announce the public release of Realm Cocoa 5.0, with a ground-up rearchitecting of the core database. In the time since we first released the Realm Mobile Database to the world in 2014, we've done our best to adapt to how people have wanted to use Realm and help our users build better apps, faster. Some of the difficulties developers ran into came down to some consequences of design decisions we made very early on, so in 2017 we began a project to rethink our core architecture. In the process, we came up with a new design that simplified our code base, improves performance, and lets us be more flexible around multi-threaded usage. In case you missed a similar writeup for Realm Java with code examples you can find it here. > **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free! ## Frozen Objects One of the big new features this enables is Frozen Objects. One of the core ideas of Realm is our concept of live, thread-confined objects that reduce the code mobile developers need to write. Objects are the data, so when the local database is updated for a particular thread, all objects are automatically updated too. This design ensures you have a consistent view of your data and makes it extremely easy to hook the local database up to the UI. But it came at a cost for developers using reactive frameworks. Sometimes Live Objects don't work well with Functional Reactive Programming (FRP) where you typically want a stream of immutable objects. This means that Realm objects have to be confined to a single thread. Frozen Objects solve both of these problems by letting you obtain an immutable snapshot of an object or collection which is fully thread-safe, *without* copying it out of the realm. This is especially important with Apple's release of Combine and SwiftUI, which are built around many of the ideas of Reactive programming. For example, suppose we have a nice simple list of Dogs in SwiftUI: ``` Swift class Dog: Object, ObjectKeyIdentifable { @objc dynamic var name: String = "" @objc dynamic var age: Int = 0 } struct DogList: View { @ObservedObject var dogs: RealmSwift.List var body: some View { List { ForEach(dogs) { dog in Text(dog.name) } } } } ``` If you've ever tried to use Realm with SwiftUI, you can probably see a problem here: SwiftUI holds onto references to the objects passed to `ForEach()`, and if you delete an object from the list of dogs it'll crash with an index out of range error. Solving this used to involve complicated workarounds, but with Realm Cocoa 5.0 is as simple as freezing the list passed to `ForEach()`: ``` swift struct DogList: View { @ObservedObject var dogs: RealmSwift.List var body: some View { List { ForEach(dogs.freeze()) { dog in Text(dog.name) } } } } ``` Now let's suppose we want to make this a little more complicated, and group the dogs by their age. In addition, we want to do the grouping on a background thread to minimize the amount of work done on the main thread. 
Fortunately, Realm Cocoa 5.0 makes this easy: ``` swift struct DogGroup { let label: String let dogs: Dog] } final class DogSource: ObservableObject { @Published var groups: [DogGroup] = [] private var cancellable: AnyCancellable? init() { cancellable = try! Realm().objects(Dog.self) .publisher .subscribe(on: DispatchQueue(label: "background queue")) .freeze() .map { dogs in Dictionary(grouping: dogs, by: { $0.age }).map { DogGroup(label: "\($0)", dogs: $1) } } .receive(on: DispatchQueue.main) .assertNoFailure() .assign(to: \.groups, on: self) } deinit { cancellable?.cancel() } } struct DogList: View { @EnvironmentObject var dogs: DogSource var body: some View { List { ForEach(dogs.groups, id: \.label) { group in Section(header: Text(group.label)) { ForEach(group.dogs) { dog in Text(dog.name) } } } } } } ``` Because frozen objects aren't thread-confined, we can subscribe to change notifications on a background thread, transform the data to a different form, and then pass it back to the main thread without any issues. ## Combine Support You may also have noticed the `.publisher` in the code sample above. [Realm Cocoa 5.0 comes with basic built-in support for using Realm objects and collections with Combine. Collections (List, Results, LinkingObjects, and AnyRealmCollection) come with a `.publisher` property which emits the collection each time it changes, along with a `.changesetPublisher` property that emits a `RealmCollectionChange` each time the collection changes. For Realm objects, there are similar `publisher()` and `changesetPublisher()` free functions which produce the equivalent for objects. For people who want to use live objects with Combine, we've added a `.threadSafeReference()` extension to `Publisher` which will let you safely use `receive(on:)` with thread-confined types. This lets you write things like the following code block to easily pass thread-confined objects or collections between threads. ``` swift publisher(object) .subscribe(on: backgroundQueue) .map(myTransform) .threadSafeReference() .receive(on: .main) .sink {print("\($0)")} ``` ## Queue-confined Realms Another threading improvement coming in Realm Cocoa 5.0 is the ability to confine a realm to a serial dispatch queue rather than a thread. A common pattern in Swift is to use a dispatch queue as a lock which guards access to a variable. Historically, this has been difficult with Realm, where queues can run on any thread. For example, suppose you're using URLSession and want to access a Realm each time you get a progress update. In previous versions of Realm you would have to open the realm each time the callback is invoked as it won't happen on the same thread each time. With Realm Cocoa 5.0 you can open a realm which is confined to that queue and can be reused: ``` swift class ProgressTrackingDelegate: NSObject, URLSessionDownloadDelegate { public let queue = DispatchQueue(label: "background queue") private var realm: Realm! override init() { super.init() queue.sync { realm = try! Realm(queue: queue) } } public var operationQueue: OperationQueue { let operationQueue = OperationQueue() operationQueue.underlyingQueue = queue return operationQueue } func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask, didWriteData bytesWritten: Int64, totalBytesWritten: Int64, totalBytesExpectedToWrite: Int64) { guard let url = downloadTask.originalRequest?.url?.absoluteString else { return } try! 
realm.write { let progress = realm.object(ofType: DownloadProgress.self, forPrimaryKey: url) if let progress = progress { progress.bytesWritten = totalBytesWritten } else { realm.create(DownloadProgress.self, value: "url": url, "bytesWritten": bytesWritten ]) } } } } let delegate = ProgressTrackingDelegate() let session = URLSession(configuration: URLSessionConfiguration.default, delegate: delegate, delegateQueue: delegate.operationQueue) ``` You can also have notifications delivered to a dispatch queue rather than the current thread, including queues other than the active one. This is done by passing the queue to the observe function: `let token = object.observe(on: myQueue) { ... }`. ## Performance With [Realm Cocoa 5.0, we've greatly improved performance in a few important areas. Sorting Results is roughly twice as fast, and deleting objects from a Realm is as much as twenty times faster than in 4.x. Object insertions are 10-25% faster, with bigger gains being seen for types with primary keys. Most other operations should be similar in speed to previous versions. Realm Cocoa 5.0 should also typically produce smaller Realm files than previous versions. We've adjusted how we store large binary blobs so that they no longer result in files with a large amount of empty space, and we've reduced the size of the transaction log that's written to the file. ## Compatibility Realm Cocoa 5.0 comes with a new version of the Realm file format. Any existing files that you open will be automatically upgraded to the new format, with the exception of read-only files (such as those bundled with your app). Those will need to be manually upgraded, which can be done by opening them in Realm Studio or recreating them through whatever means you originally created the file. The upgrade process is one-way, and realms cannot be converted back to the old file format. Only minor API changes have been made, and we expect most applications which did not use any deprecated functions will compile and work with no changes. You may notice some changes to undocumented behavior, such as that deleting objects no longer changes the order of objects in an unsorted `Results`. Pre-1.0 Realms containing `Date` or `Any` properties can no longer be opened. Want to try it out for yourself? Check out our working demo app using Frozen Objects, SwiftUI, and Combine. - Simply clone the realm-cocoa repo and open `RealmExamples.xworkspace` then select the `ListSwiftUI` app in Xcode and Build. ## Wrap Up We're very excited to finally get these features out to you and to see what new things you'll be able to build with them. Stay tuned for more exciting new features to come; the investment in the Realm Database continues. ## Links Want to learn more? Review the documentation.. Ready to get started? Get Realm Core 6.0 and the SDKs. Want to ask a question? Head over to our MongoDB Realm Developer Community Forums.
md
{ "tags": [ "Realm", "Swift", "iOS" ], "pageDescription": "Public release of Realm Cocoa 5.0, with a ground-up rearchitecting of the core database", "contentType": "News & Announcements" }
Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/build-animated-timeline-chart-embedding-sdk
created
# How to Build an Animated Timeline Chart with the MongoDB Charts Embedding SDK The Charts Embedding SDK allows you to embed data visualizations in your application effortlessly, giving users and developers control over embedded charts. It can be a powerful tool, especially when bound to user actions. My goal today is to show you the Embedding SDK in action. This is just scratching the surface of what you can build with the SDK, and I hope this helps spark ideas as to its use within your applications. If you want to read more about the SDK, make sure to check the npm package page. Reading this blog post will give you a practical example of how to build a timeline chart in your application using the Embedding SDK. ## What is a timeline chart? A timeline chart is an effective way to visualize a process or events in chronological order. A good example might be showing population growth over time, or temperature readings per second from an IOT device. At the moment of writing this, we support 23 chart types in MongoDB Charts, and a timeline chart is not one of them. Thanks to the Charts Embedding SDK and a bit of code, we can build similar behaviour on our own, and I think that's a great example of how flexible the SDK is. It allows us to programmatically change an embedded chart using filters and setting different configurations. We will build a timeline chart in three steps: 1. Create the static chart in MongoDB Charts 2. Embed the chart in your application 3. Programmatically manage the chart's behaviour with the Embedding SDK to show the data changes over time I've done these three steps for a small example application that is presenting a timeline of the Olympic Games, and it shows the Olympic medals per country during the whole history of the Olympics (data sourced from Kaggle). I'm using two charts — a geospatial and a bar chart. They give different perspectives of how the data changes over time, to see where the medals are distributed, and the magnitude of wins. The slider allows the user to move through time. Watching the time lapse, you can see some insights about the data that you wouldn't have noticed if that was a static chart. Here are some observations: - Greece got most of the medals in the first Olympics (Athens, 1896) and France did the same in the second Olympics (Paris, 1900), so it looks like being a host boosts your performance. - 1924 was a very good year for most Nordic countries - we have Sweden at 3rd place, Norway(6th), Denmark(7th) and Finland(8th). If you watch Sweden closely, you will see that it was in top 5 most of the time. - Russia (which includes the former USSR in this dataset) got in top 8 for the first time hardly in 1960 but caught up quickly and is 3rd in the overall statistics. - Australia reached top 8 in 2008 and have kept that position since. - The US was a leader almost the entire time of the timeline. Here is how I built it in more details: ## Step 1: Create the chart in MongoDB Charts You have to create the chart you intend to be part of the timeline you are building. The easiest way to do that is to use MongoDB Atlas with a free tier cluster. Once your data is loaded into your cluster, you can activate Charts in your project and start charting. If you haven't used Charts before, you can check the steps to create a chart in this blog post here, or you can also follow the tutorials in our comprehensive documentation. 
Here are the two charts I've created on my dashboard, that I will embed in my example application: We have a bar chart that shows the first 8 countries ordered by the accumulated sum of medals they won in the history of the Olympics. :charts]{url=https://charts.mongodb.com/charts-data-science-project-aygif id=ff518bbb-923c-4c2c-91f5-4a2b3137f312 theme=light} And there is also a geospatial chart that shows the same data but on the map. :charts[]{url=https://charts.mongodb.com/charts-data-science-project-aygif id=b1983061-ee44-40ad-9c45-4bb1d4e74884 theme=light} So we have these two charts, and they provide a good view of the overall data without any filters. It will be more impressive to see how these numbers progressed for the timeline of the Olympics. For this purpose, I've embedded these two charts in my application, where thanks to the Embedding SDK, I will programmatically control their behaviour using a [filter on the data. ## Step 2: Embedding the charts You also have to allow embedding for the data and the charts. To do that at once, open the menu (...) on the chart and select "Embed Chart": Since this data is not sensitive, I've enabled unauthenticated embedding for each of my two charts with this toggle shown in the image below. For more sensitive data you should choose the Authenticated option to restrict who can view the embedded charts. Next, you have to explicitly allow the fields that will be used in the filters. You do that in the same embedding dialog that was shown above. Filtering an embedded chart is only allowed on fields you specify and these have to be set up in advance. Even if you use unauthenticated embedding, you still control the security over your data, so you can decide what can be filtered. In my case, this is just one field - the "year" field because I'm setting filters on the different Olympic years and that's all I need for my demo. ## Step 3: Programmatically control the charts in your app This is the step that includes the few lines of code I mentioned above. The example application is a small React application that has the two embedded charts that you saw earlier positioned side-by-side. There is a slider on the top of the charts. This slider moves through the timeline and shows the sum of medals the countries have won by the relevant year. In the application, you can navigate through the years yourself by using the slider, however there is also a play button at the top right, which presents everything in a timelapse manner. How the slider works is that every time it changes position, I set a filter to the embedded charts using the SDK method `setFilter`. For example, if the slider is at year 2016, it means there is a filter that gets all data for the years starting from the beginning up until 2016. ``` javascript // This function is creating the filter that will be executed on the data. const getDataFromAllPreviousYears = (endYear) => { let filter = { $and: { Year: { $gte: firstOlympicsYear } }, { Year: { $lte: endYear } }, ], }; return Promise.all([ geoChart.setFilter(filter), barChart.setFilter(filter), ]); }; ``` For the play functionality, I'm doing the same thing - changing the filter every 2 seconds using the Javascript function setInterval to schedule a function call that changes the filter every 2 seconds. 
``` javascript // this function schedules a filter call with the specified time interval const setTimelineInterval = () => { if (playing) { play(); timerIdRef.current = setInterval(play, timelineInterval); } else { clearInterval(timerIdRef.current); } }; ``` In the geospatial map, you can zoom to an area of interest. Europe would be an excellent example as it has a lot of countries and that makes the geospatial chart look more dynamic. You can also pause the auto-forwarding at any moment and resume or even click forwards or backwards to a specific point of interest. ## Conclusion The idea of making this application was to show how the Charts Embedding SDK can allow you to add interactivity to your charts. Doing timeline charts is not a feature of the Embedding SDK, but it perfectly demonstrates that with a little bit of code, you can do different things with your charts. I hope you liked the example and got an idea of how powerful the SDK is. The whole code example can be seen in [this repo. All you need to do to run it is to clone the repo, run `npm install` and `npm start`. Doing this will open the browser with the timeline using my embedded charts so you will see a working example straight away. If you wish to try this using your data and charts, I've put some highlights in the example code of what has to be changed. You can jump-start your ideas by signing up for MongoDB Cloud, deploying a free Atlas cluster, and activating MongoDB Charts. Feel free to check our documentation and explore more embedding example apps, including authenticated examples if you wish to control who can see your embedded charts. We would also love to see how you are using the Embedding SDK. If you have suggestions on how to improve anything in Charts, use the MongoDB Feedback Engine. We use this feedback to help improve Charts and figure out what features to build next. Happy Charting!
md
{ "tags": [ "Atlas", "JavaScript" ], "pageDescription": "Learn how to build an animated timeline chart with the MongoDB Charts Embedding SDK", "contentType": "Tutorial" }
How to Build an Animated Timeline Chart with the MongoDB Charts Embedding SDK
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/connectors/tuning-mongodb-kafka-connector
created
# Tuning the MongoDB Connector for Apache Kafka

The MongoDB Connector for Apache Kafka (MongoDB Connector) is an open-source Java application that works with Apache Kafka Connect, enabling seamless data integration of MongoDB with the Apache Kafka ecosystem. When working with the MongoDB Connector, the default values cover a great variety of scenarios, but some scenarios require more fine-grained tuning. In this article, we will walk through the important configuration properties that affect the performance of the MongoDB Kafka Source and Sink Connectors, and share general recommendations.

## Tuning the source connector

Let’s first take a look at the connector when it is configured to read data from MongoDB and write it into a Kafka topic. When you configure the connector this way, it is known as a “source connector.” When the connector is configured as a source, a change stream is opened within the MongoDB cluster based upon any configuration you specified, such as pipeline. These change stream events get read into the connector and then written out to the Kafka topic, and they resemble the following:

```
{
   "_id" : { <resume token> },
   "operationType" : "<operation type>",
   "fullDocument" : { <document> },
   "ns" : {
      "db" : "<database>",
      "coll" : "<collection>"
   },
   "to" : {
      "db" : "<database>",
      "coll" : "<collection>"
   },
   "documentKey" : { "_id" : <value> },
   "updateDescription" : {
      "updatedFields" : { <document> },
      "removedFields" : [ "<field>", ... ],
      "truncatedArrays" : [
         { "field" : <field name>, "newSize" : <integer> },
         ...
      ]
   },
   "clusterTime" : <Timestamp>,
   "txnNumber" : <NumberLong>,
   "lsid" : {
      "id" : <UUID>,
      "uid" : <BinData>
   }
}
```

The connector configuration properties help define what data is written out to Kafka. For example, consider the scenario where we insert the following into MongoDB:

```
use Stocks
db.StockData.insertOne({'symbol':'MDB','price':441.67,'tx_time':Date.now()})
```

When **publish.full.document.only** is set to false (the default setting), the connector writes the entire event as shown below:

```
{"_id": {"_data": "826205217F000000022B022C0100296E5A1004AA1707081AA1414BB9F647FD49855EE846645F696400646205217FC26C3DE022E9488E0004"},
 "operationType": "insert",
 "clusterTime": {"$timestamp": {"t": 1644503423, "i": 2}},
 "fullDocument": {"_id": {"$oid": "6205217fc26c3de022e9488e"}, "symbol": "MDB", "price": 441.67, "tx_time": 1.644503423267E12},
 "ns": {"db": "Stocks", "coll": "StockData"},
 "documentKey": {"_id": {"$oid": "6205217fc26c3de022e9488e"}}}
```

When **publish.full.document.only** is set to true and we issue a similar statement, it looks like the following:

```
use Stocks
db.StockData.insertOne({'symbol':'TSLA','price':920.00,'tx_time':Date.now()})
```

We can see that the data written to the Kafka topic is just the changed document itself, which in this example is an inserted document.

```
{"_id": {"$oid": "620524b89d2c7fb2a606aa16"}, "symbol": "TSLA", "price": 920, "tx_time": 1.644504248732E12}
```

### Resume tokens

Another important concept to understand with source connectors is resume tokens. Resume tokens make it possible for the connector to fail, get restarted, and resume where it left off reading the MongoDB change stream. By default, resume tokens are stored in a Kafka topic defined by the **offset.storage.topic** parameter (configurable at the Kafka Connect Worker level for distributed environments) or in the file system in a file defined by the **offset.storage.file.filename** parameter (configurable at the Kafka Connect Worker level for standalone environments). In the event that the connector has been offline and the underlying MongoDB oplog has rolled over, you may get an error when the connector restarts.
Read the [Invalid Resume Token section of the online documentation to learn more about this condition. ### Configuration properties The full set of properties for the Kafka Source Connector can be found in the documentation. The properties that should be considered with respect to performance tuning are as follows: * **batch.size**: the cursor batch size that defines how many change stream documents are retrieved on each **getMore** operation. Defaults to 1,000. * **poll.await.time.ms**: the amount of time to wait in milliseconds before checking for new results on the change stream. Defaults to 5,000. * **poll.max.batch.size**: maximum number of source records to send to Kafka at once. This setting can be used to limit the amount of data buffered internally in the Connector. Defaults to 1,000. * **pipeline**: an array of aggregation pipeline stages to run in your change stream. Defaults to an empty pipeline that provides no filtering. * **copy.existing.max.threads**: the number of threads to use when performing the data copy. Defaults to the number of processors. * **copy.existing.queue.size**: the max size of the queue to use when copying data. This is buffered internally by the Connector. Defaults to 16,000. ### Recommendations The following are some general recommendations and considerations when configuring the source connector: #### Scaling the source One of the most common questions is how to scale the source connector. For scenarios where you have a large amount of data to be copied via **copy.existing**, keep in mind that using the source connector this way may not be the best way to move this large amount of data. Consider the process for copy.existing: * Store the latest change stream resume token. * Spin up a thread (up to **copy.existing.max.threads**) for each namespace that is being copied. * When all threads finish, the resume tokens are read, written, and caught up to current time. While technically, the data will eventually be copied, this process is relatively slow. And if your data size is large and your incoming data is faster than the copy process, the connector may never get into a state where new data changes are handled by the connector. For high throughput datasets trying to be copied with copy.existing, a typical situation is overwriting the resume token stored in (1) due to high write activity. This breaks the copy.existing functionality, and it will need to be restarted, on top of dealing with the messages that were already processed to the Kafka topic. When this happens, the alternatives are: * Increase the oplog size to make sure the copy.existing phase can finish. * Throttle write activity in the source cluster until the copy.existing phase finishes. Another option for handling high throughput of change data is to configure multiple source connectors. Each source connector should use a **pipeline** and capture changes from a subset of the total data. Keep in mind that each time you create a source connector pointed to the same MongoDB cluster, it creates a separate change stream. Each change stream requires resources from the MongoDB cluster, and continually adding them will decrease server performance. That said, this degradation may not become noticeable until the amount of connectors reaches the 100+ range, so breaking your collections into five to 10 connector pipelines is the best way to increase source performance. 
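As a rough sketch of that pattern, the configurations below define two source connectors that split one high-throughput collection by a field value using `pipeline`, so that no single change stream carries the full write volume. The connection string, database, field names, and the idea of splitting by `symbol` are illustrative assumptions; each config would still be registered separately (for example, through the Kafka Connect REST API) and given its own name and topic settings.

```js
// Shared settings for both source connectors (placeholder connection details).
const baseConfig = {
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "connection.uri": "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net", // placeholder
  "database": "Stocks",
  "collection": "StockData",
  "publish.full.document.only": "true",
};

// Connector 1: only change events for a hot subset of symbols.
const hotSymbolsSource = {
  ...baseConfig,
  "pipeline": JSON.stringify([
    { $match: { "fullDocument.symbol": { $in: ["MDB", "TSLA"] } } },
  ]),
};

// Connector 2: everything else from the same collection.
const remainingSymbolsSource = {
  ...baseConfig,
  "pipeline": JSON.stringify([
    { $match: { "fullDocument.symbol": { $nin: ["MDB", "TSLA"] } } },
  ]),
};
```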
In addition, using several different source connectors on the same namespace changes the total ordering of the data on the sink versus the original order of data in the source cluster. #### Tune the change stream pipeline When building your Kafka Source Connector configuration, ensure you appropriately tune the “pipeline” so that only wanted events are flowing from MongoDB to Kafka Connect, which helps reduce network traffic and processing times. For a detailed pipeline example, check out the Customize a Pipeline to Filter Change Events section of the online documentation. #### Adjust to the source cluster throughput Your Kafka Source Connector can be watching a set of collections with a low volume of events, or the opposite, a set of collections with a very high volume of events. In addition, you may want to tune your Kafka Source Connector to react faster to changes, reduce round trips to MongoDB or Kafka, and similar changes. With this in mind, consider adjusting the following properties for the Kafka Source Connector: * Adjust the value of **batch.size**: * Higher values mean longer processing times on the source cluster but fewer round trips to it. It can also increase the chances of finding relevant change events when the volume of events being watched is small. * Lower values mean shorter processing times on the source cluster but more round trips to it. It can reduce the chances of finding relevant change events when the volume of events being watched is small. * Adjust the value of **poll.max.batch.size**: * Higher values require more memory to buffer the source records with fewer round trips to Kafka. This comes at the expense of the memory requirements and increased latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic. * Lower values require less memory to buffer the source records with more round trips to Kafka. It can also help reduce the latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic. * Adjust the value of **poll.await.time.ms**: * Higher values can allow source clusters with a low volume of events to have any information to be sent to Kafka at the expense of increased latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic. * Lower values reduce latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic. But for source clusters with a low volume of events, it can prevent them from having any information to be sent to Kafka. This information is an overview of what to expect when changing these values, but keep in mind that they are deeply interconnected, with the volume of change events on the source cluster having an important impact too: 1. The Kafka Source Connector issues getMore commands to the source cluster using **batch.size**. 2. The Kafka Source Connector receives the results from step 1 and waits until either **poll.max.batch.size** or **poll.await.time.ms** is reached. While this doesn’t happen, the Kafka Source Connector keeps “feeding” itself with more getMore results. 3. When either **poll.max.batch.size** or **poll.await.time.ms** is reached, the source records are sent to Kafka. 
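For reference, here is a hedged sketch of how those three properties might appear together in a source connector configuration. The values are arbitrary examples chosen to show the trade-offs described above, not recommendations.

```js
// Illustrative source connector tuning fragment.
const sourceTuning = {
  // Documents fetched from the change stream per getMore: higher means fewer
  // round trips to MongoDB, but longer processing time per call on the source cluster.
  "batch.size": "2000",
  // Max source records buffered before being sent to Kafka: higher means more
  // memory use and latency, but fewer round trips to Kafka.
  "poll.max.batch.size": "1500",
  // How long to wait for new change events before flushing what is buffered:
  // lower reduces end-to-end latency, higher helps batching on quiet clusters.
  "poll.await.time.ms": "2500",
};
```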
#### “Copy existing” feature When running with the **copy.existing** property set to **true**, consider these additional properties: * **copy.existing.queue.size**: the amount of records the Kafka Source Connector buffers internally. This queue and its size include all the namespaces to be copied by the “Copy Existing” feature. If this queue is full, the Kafka Source Connector blocks until space becomes available. * **copy.existing.max.threads**: the amount of concurrent threads used for copying the different namespaces. There is a one namespace to one thread mapping, so it is common to increase this up to the maximum number of namespaces being copied. If the number exceeds the number of cores available in the system, then the performance gains can be reduced. * **copy.existing.allow.disk.use**: allows the copy existing aggregation to use temporary disk storage if required. The default is set to true but should be set to false if the user doesn't have the permissions for disk access. #### Memory implications If you experience JVM “out of memory” issues on the Kafka Connect Worker process, you can try reducing the following two properties that control the amount of data buffered internally: * **poll.max.batch.size** * **copy.existing.queue.size**: applicable if the “copy.existing” property is set to true. It is important to note that lowering these values can result in unwanted impact. Adjusting the JVM Heap Size to your environment needs is recommended as long as you have available resources and the memory needs are not the result of memory leaks. ## Tuning the sink connector When the MongoDB Connector is configured as a sink, it reads from a Kafka topic and writes to a MongoDB collection. As with the source, there exists a mechanism to ensure offsets are stored in the event of a sink failure. Kafka connect manages this, and the information is stored in the __consumer_offsets topic. The MongoDB Connector has configuration properties that affect performance. They are as follows: * **max.batch.size**: the maximum number of sink records to batch together for processing. A higher number will result in more documents being sent as part of a single bulk command. Default value is 0. * **rate.limiting.every.n**: number of processed batches that trigger the rate limit. A value of 0 means no rate limiting. Default value is 0. In practice, this setting is rarely used. * **rate.limiting.timeout**: how long (in milliseconds) to wait before continuing to process data once the rate limit is reached. Default value is 0. This setting is rarely used. * **tasks.max**: the maximum number of tasks. Default value is 1. ### Recommendations #### Add indexes to your collections for consistent performance Writes performed by the sink connector take additional time to complete as the size of the underlying MongoDB collection grows. To prevent performance deterioration, use an index to support these write queries. #### Achieve as much parallelism as possible The Kafka Sink Connector (KSC) can take advantage of parallel execution thanks to the **tasks.max** property. The specified number of tasks will only be created if the source topic has the same number of partitions. Note: A partition should be considered as a logic group of ordered records, and the producer of the data determines what each partition contains. 
Here is the breakdown of the different combinations of number of partitions in the source topic and tasks.max values: **If working with more than one partition but one task:** * The task processes partitions one by one: Once a batch from a partition is processed, it moves on to another one so the order within each partition is still guaranteed. * Order among all the partitions is not guaranteed. **If working with more than one partition and an equal number of tasks:** * Each task is assigned one partition and the order is guaranteed within each partition. * Order among all the partitions is not guaranteed. **If working with more than one partition and a smaller number of tasks:** * The tasks that are assigned more than one partition process partitions one by one: Once a batch from a partition is processed, it moves on to another one so the order within each partition is still guaranteed. * Order among all the partitions is not guaranteed. **If working with more than one partition and a higher number of tasks:** * Each task is assigned one partition and the order is guaranteed within each partition. * KSC will not generate an excess number of tasks. * Order among all the partitions is not guaranteed. Processing of partitions may not be in order, meaning that Partition B may be processed before Partition A. All messages within the partition conserve strict order. Note: When using MongoDB to write CDC data, the order of data is important since, for example, you do not want to process a delete before an update on the same data. If you specify more than one partition for CDC data, you run the risk of data being out of order on the sink collection. #### Tune the bulk operations The Kafka Sink Connector (KSC) works by issuing bulk write operations. All the bulk operations that the KSC executes are, by default, ordered and as such, the order of the messages is guaranteed within a partition. See Ordered vs Unordered Operations for more information. Note: As of 1.7, **bulk.write.ordered**, if set to false, will process the bulk out of order, enabling more documents within the batch to be written in the case of a failure of a portion of the batch. The amount of operations that are sent in a single bulk command can have a direct impact on performance. You can modify this by adjusting **max.batch.size**: * A higher number will result in more operations being sent as part of a single bulk command. This helps improve throughput at the expense of some added latency. However, a very big number might result in cache pressure on the destination cluster. * A small number will ease the potential cache pressure issues which might be useful for destination clusters with fewer resources. However, throughput decreases, and you might experience consumer lag on the source topics as the producer might publish messages in the topic faster than the KSC processes them. * This value affects processing within each of the tasks of the KSC. #### Throttle the Kafka sink connector In the event that the destination MongoDB cluster is not able to handle consistent throughput, you can configure a throttling mechanism. You can do this with two properties: * **rate.limiting.every.n**: number of processed batches that should trigger the rate limit. A value of 0 means no rate limiting. * **rate.limiting.timeout**: how long (in milliseconds) to wait before continuing to process data once the rate limit is reached. 
The end result is that whenever the KSC writes **rate.limiting.every.n** batches, it waits **rate.limiting.timeout** milliseconds before writing the next batch. This allows a destination MongoDB cluster that cannot handle consistent throughput to recover before receiving new load from the KSC.
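To make these settings concrete, here is a minimal sketch of a sink connector configuration that pulls the properties discussed above together. The connection string, topic, database, and collection names are placeholders, and the values shown are purely illustrative starting points rather than recommendations; tune them against your own workload.

``` json
{
  "name": "mongodb-tuned-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "orders",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "database": "sales",
    "collection": "orders",
    "tasks.max": "4",
    "max.batch.size": "500",
    "bulk.write.ordered": "false",
    "rate.limiting.every.n": "0",
    "rate.limiting.timeout": "0"
  }
}
```

In this sketch, **tasks.max** is set to 4 on the assumption that the source topic has at least four partitions, **bulk.write.ordered** is disabled only because the example assumes the topic does not carry CDC events where ordering matters, and rate limiting is left off until the destination cluster shows signs of struggling with the load.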
md
{ "tags": [ "Connectors", "Kafka" ], "pageDescription": "When building a MongoDB and Apache Kafka solution, the default configuration values satisfy many scenarios, but there are some tweaks to increase performance. In this article, we walk through important configuration properties as well as general best practice recommendations. ", "contentType": "Tutorial" }
Tuning the MongoDB Connector for Apache Kafka
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/client-side-field-level-encryption-csfle-mongodb-node
created
# How to use MongoDB Client-Side Field Level Encryption (CSFLE) with Node.js

Have you ever had to develop an application that stored sensitive data, like credit card numbers or social security numbers? This is a super common use case for databases, and it can be a pain to store this data in a secure way. Luckily for us, there are some incredible security features that come packaged with MongoDB. For example, you should know that with MongoDB, you can take advantage of:

- Network and user-based rules, which allow administrators to grant and restrict collection-level permissions for users.
- Encryption of your data at rest, which encrypts the database files on disk.
- Transport encryption using TLS/SSL, which encrypts data over the network.
- And now, you can even have client-side encryption, known as client-side field level encryption (CSFLE).

The following diagram is a list of MongoDB security features offered and the potential security vulnerabilities that they address:

Client-Side Field Level Encryption allows engineers to specify the fields of a document that should be kept encrypted. Sensitive data is transparently encrypted/decrypted by the client and only communicated to and from the server in encrypted form. This mechanism keeps the specified data fields secure in encrypted form on both the server and the network. While all clients have access to the non-sensitive data fields, only appropriately configured CSFLE clients are able to read and write the sensitive data fields.

In this post, we will design a Node.js client that could be used to safely store select fields as part of a medical application.

## The Requirements

There are a few requirements that must be met prior to attempting to use Client-Side Field Level Encryption (CSFLE) with the Node.js driver.

- MongoDB Atlas 4.2+ or MongoDB Server 4.2 Enterprise
- MongoDB Node driver 3.6.2+
- The libmongocrypt library installed (macOS installation instructions below)
- The mongocryptd binary installed (macOS installation instructions below)

> This tutorial will focus on automatic encryption. While this tutorial will use MongoDB Atlas, you're going to need to be using version 4.2 or newer for MongoDB Atlas or MongoDB Enterprise Edition. You will not be able to use automatic field level encryption with MongoDB Community Edition.

The assumption is that you're familiar with developing Node.js applications that use MongoDB. If you want a refresher, take a look at the quick start series that we published on the topic.

## Installing the Libmongocrypt and Mongocryptd Binaries and Libraries

Because of the **libmongocrypt** and **mongocryptd** requirements, it's worth reviewing how to install and configure them. We'll be exploring installation on macOS, but refer to the documentation for libmongocrypt and mongocryptd for your particular operating system.

### libmongocrypt

**libmongocrypt** is required for automatic field level encryption, as it is the component that is responsible for performing the encryption or decryption of the data on the client with the MongoDB 4.2-compatible Node drivers. Now, there are currently a few solutions for installing the **libmongocrypt** library on macOS. However, the easiest is with Homebrew.
If you've got Homebrew installed, you can install **libmongocrypt** with the following command:

``` bash
brew install mongodb/brew/libmongocrypt
```

> I ran into an issue with libmongocrypt when I tried to run my code, because the build was trying to statically link against libmongocrypt instead of dynamically linking it. I have submitted an issue to the team, but to work around it for now, I had to run:

``` bash
export BUILD_TYPE=dynamic
```

### mongocryptd

**mongocryptd** is required for automatic field level encryption and is included as a component in the MongoDB Enterprise Server package. **mongocryptd** is only responsible for supporting automatic client-side field level encryption and does *not* perform encryption or decryption.

You'll want to consult the documentation on how to obtain the **mongocryptd** binary, as each operating system has different steps.

For macOS, you'll want to download MongoDB Enterprise Edition from the MongoDB Download Center. You can refer to the Enterprise Edition installation instructions for macOS to install, but the gist of the installation involves extracting the TAR file and moving the files to the appropriate directory.

By this point, all the appropriate components for client-side field level encryption should be installed or available. Make sure that you are running MongoDB Enterprise on your client while using CSFLE, even if you are saving your data to Atlas.

## Project Setup

Let's start by setting up all the files and dependencies we will need. In a new directory, create the following files by running this command:

``` bash
touch clients.js helpers.js make-data-key.js
```

Be sure to initialize a new NPM project, since we will be using several NPM dependencies.

``` bash
npm init --yes
```

And let's just go ahead and install all the packages that we will be using now.

``` bash
npm install -S mongodb mongodb-client-encryption node-gyp
```

> Note: The complete codebase for this project can be found here:

## Create a Data Key in MongoDB for Encrypting and Decrypting Document Fields

MongoDB Client-Side Field Level Encryption (CSFLE) uses an encryption strategy called envelope encryption, in which keys used to encrypt/decrypt data (called data encryption keys) are encrypted with another key (called the master key). The following diagram shows how the **master key** is created and stored:

> Warning
>
> The Local Key Provider is not suitable for production.
>
> The Local Key Provider is an insecure method of storage and is therefore **not recommended** if you plan to use CSFLE in production. Instead, you should configure a master key in a Key Management System (KMS), which stores and decrypts your data encryption keys remotely.
>
> To learn how to use a KMS in your CSFLE implementation, read the Client-Side Field Level Encryption: Use a KMS to Store the Master Key guide.
``` javascript
// helpers.js
const fs = require("fs")
const mongodb = require("mongodb")
const { ClientEncryption } = require("mongodb-client-encryption")
const { MongoClient, Binary } = mongodb

module.exports = {
  readMasterKey: function (path = "./master-key.txt") {
    return fs.readFileSync(path)
  },
  CsfleHelper: class {
    constructor({
      kmsProviders = null,
      keyAltNames = "demo-data-key",
      keyDB = "encryption",
      keyColl = "__keyVault",
      schema = null,
      connectionString = "mongodb://localhost:27017",
      mongocryptdBypassSpawn = false,
      mongocryptdSpawnPath = "mongocryptd"
    } = {}) {
      if (kmsProviders === null) {
        throw new Error("kmsProviders is required")
      }
      this.kmsProviders = kmsProviders
      this.keyAltNames = keyAltNames
      this.keyDB = keyDB
      this.keyColl = keyColl
      this.keyVaultNamespace = `${keyDB}.${keyColl}`
      this.schema = schema
      this.connectionString = connectionString
      this.mongocryptdBypassSpawn = mongocryptdBypassSpawn
      this.mongocryptdSpawnPath = mongocryptdSpawnPath
      this.regularClient = null
      this.csfleClient = null
    }

    /**
     * In the guide, https://docs.mongodb.com/ecosystem/use-cases/client-side-field-level-encryption-guide/,
     * we create the data key and then show that it is created by
     * retrieving it using a findOne query. Here, in implementation, we only
     * create the key if it doesn't already exist, ensuring we only have one
     * local data key.
     *
     * @param {MongoClient} client
     */
    async findOrCreateDataKey(client) {
      const encryption = new ClientEncryption(client, {
        keyVaultNamespace: this.keyVaultNamespace,
        kmsProviders: this.kmsProviders
      })

      await this.ensureUniqueIndexOnKeyVault(client)

      let dataKey = await client
        .db(this.keyDB)
        .collection(this.keyColl)
        .findOne({ keyAltNames: { $in: [this.keyAltNames] } })

      if (dataKey === null) {
        dataKey = await encryption.createDataKey("local", {
          keyAltNames: [this.keyAltNames]
        })
        return dataKey.toString("base64")
      }

      return dataKey["_id"].toString("base64")
    }
  }
}
```

The following script generates a 96-byte, locally-managed master key and saves it to a file called master-key.txt in the directory from which the script is executed, as well as saving it to our impromptu key management system in Atlas.

``` javascript
// make-data-key.js
const { readMasterKey, CsfleHelper } = require("./helpers");
const { connectionString } = require("./config");

async function main() {
  const localMasterKey = readMasterKey()

  const csfleHelper = new CsfleHelper({
    kmsProviders: {
      local: {
        key: localMasterKey
      }
    },
    connectionString: "PASTE YOUR MONGODB ATLAS URI HERE"
  })

  const client = await csfleHelper.getRegularClient()

  const dataKey = await csfleHelper.findOrCreateDataKey(client)
  console.log("Base64 data key. Copy and paste this into clients.js\t", dataKey)

  client.close()
}

main().catch(console.dir)
```

After saving this code, run the following to generate and save our keys.

``` bash
node make-data-key.js
```

And you should get this output in the terminal. Be sure to save this key, as we will be using it in our next step.

It's also a good idea to check in to make sure that this data has been saved correctly. Go to your clusters in Atlas, and navigate to your collections. You should see a new key saved in the **encryption.\_\_keyVault** collection.
Your key should be shaped like this:

``` json
{
  "_id": "UUID('27a51d69-809f-4cb9-ae15-d63f7eab1585')",
  "keyAltNames": ["demo-data-key"],
  "keyMaterial": "Binary('oJ6lEzjIEskH...', 0)",
  "creationDate": "2020-11-05T23:32:26.466+00:00",
  "updateDate": "2020-11-05T23:32:26.466+00:00",
  "status": "0",
  "masterKey": {
    "provider": "local"
  }
}
```

## Defining an Extended JSON Schema Map for Fields to be Encrypted

With the data key created, we're at a point in time where we need to figure out what fields should be encrypted in a document and what fields should be left as plain text. The easiest way to do this is with a schema map. A schema map for encryption is extended JSON and can be added directly to your Node.js source code or loaded from an external file. From a maintenance perspective, loading it from an external file is easier.

The following table illustrates the data model of the Medical Care Management System.

| **Field type** | **Encryption Algorithm** | **BSON Type** |
|--------------------------|--------------------------|-------------------------------------------|
| Name | Non-Encrypted | String |
| SSN | Deterministic | Int |
| Blood Type | Random | String |
| Medical Records | Random | Array |
| Insurance: Policy Number | Deterministic | Int (embedded inside insurance object) |
| Insurance: Provider | Non-Encrypted | String (embedded inside insurance object) |

Let's add a **createJsonSchemaMap** function to our **CsfleHelper** class in the helpers.js file so our application knows which fields need to be encrypted and decrypted.

``` javascript
createJsonSchemaMap(dataKey) {
  if (dataKey === null) {
    throw new Error(
      "dataKey is a required argument. Ensure you've defined it in clients.js"
    )
  }
  return {
    "medicalRecords.patients": {
      bsonType: "object",
      // specify the encryptMetadata key at the root level of the JSON Schema.
      // As a result, all encrypted fields defined in the properties field of the
      // schema will inherit this encryption key unless specifically overwritten.
      encryptMetadata: {
        keyId: [new Binary(Buffer.from(dataKey, "base64"), 4)]
      },
      properties: {
        insurance: {
          bsonType: "object",
          properties: {
            // The insurance.policyNumber field is embedded inside the insurance
            // field and represents the patient's policy number.
            // This policy number is a distinct and sensitive field.
            policyNumber: {
              encrypt: {
                bsonType: "int",
                algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
              }
            }
          }
        },
        // The medicalRecords field is an array that contains a set of medical record documents.
        // Each medical record document represents a separate visit and specifies information
        // about the patient at that time, such as their blood pressure, weight, and heart rate.
        // This field is sensitive and should be encrypted.
        medicalRecords: {
          encrypt: {
            bsonType: "array",
            algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
          }
        },
        // The bloodType field represents the patient's blood type.
        // This field is sensitive and should be encrypted.
        bloodType: {
          encrypt: {
            bsonType: "string",
            algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
          }
        },
        // The ssn field represents the patient's
        // social security number. This field is
        // sensitive and should be encrypted.
        ssn: {
          encrypt: {
            bsonType: "int",
            algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
          }
        }
      }
    }
  }
}
```
**mongocryptd** handles the following responsibilities:

- Validates the encryption instructions defined in the JSON Schema and flags the referenced fields for encryption in read and write operations.
- Prevents unsupported operations from being executed on encrypted fields.

To create the CSFLE-enabled client, we need to instantiate a standard MongoDB client object with the additional automatic encryption settings, using the following **code snippet**:

``` javascript
async getCsfleEnabledClient(schemaMap = null) {
  if (schemaMap === null) {
    throw new Error(
      "schemaMap is a required argument. Build it using the CsfleHelper.createJsonSchemaMap method"
    )
  }
  const client = new MongoClient(this.connectionString, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    monitorCommands: true,
    autoEncryption: {
      // The key vault collection contains the data key that the client uses to encrypt and decrypt fields.
      keyVaultNamespace: this.keyVaultNamespace,
      // The client expects a key management system to store and provide the application's master encryption key.
      // For now, we will use a local master key, so we use the local KMS provider.
      kmsProviders: this.kmsProviders,
      // The JSON Schema that we have defined doesn't explicitly specify the collection to which it applies.
      // To assign the schema, we map it to the medicalRecords.patients collection namespace.
      schemaMap
    }
  })
  return await client.connect()
}
```

If the connection was successful, the client is returned.

## Perform Encrypted Read/Write Operations

We now have a CSFLE-enabled client and we can test that the client can perform queries that meet our security requirements.

### Insert a Document with Encrypted Fields

The following diagram shows the steps taken by the client application and driver to perform a write of field-level encrypted data:

We need to write a function in our clients.js file to create a new patient record, with the following **code snippet**:

Note: Clients that do not have CSFLE configured will insert unencrypted data. We recommend using server-side schema validation to enforce encrypted writes for fields that should be encrypted.

``` javascript
const { readMasterKey, CsfleHelper } = require("./helpers");
const { connectionString, dataKey } = require("./config");

const localMasterKey = readMasterKey()

const csfleHelper = new CsfleHelper({
  // The client expects a key management system to store and provide the application's master
  // encryption key. For now, we will use a local master key, so we use the local KMS provider.
  kmsProviders: {
    local: {
      key: localMasterKey
    }
  },
  connectionString,
})

async function main() {
  let regularClient = await csfleHelper.getRegularClient()
  let schemeMap = csfleHelper.createJsonSchemaMap(dataKey)
  let csfleClient = await csfleHelper.getCsfleEnabledClient(schemeMap)

  let exampleDocument = {
    name: "Jon Doe",
    ssn: 241014209,
    bloodType: "AB+",
    medicalRecords: [
      {
        weight: 180,
        bloodPressure: "120/80"
      }
    ],
    insurance: {
      provider: "MaestCare",
      policyNumber: 123142
    }
  }

  const regularClientPatientsColl = regularClient
    .db("medicalRecords")
    .collection("patients")
  const csfleClientPatientsColl = csfleClient
    .db("medicalRecords")
    .collection("patients")

  // Performs the insert operation with the csfle-enabled client
  // We're using an update with an upsert so that subsequent runs of this script
  // don't insert new documents
  await csfleClientPatientsColl.updateOne(
    { ssn: exampleDocument["ssn"] },
    { $set: exampleDocument },
    { upsert: true }
  )

  // Performs a read using the encrypted client, querying on an encrypted field
  const csfleFindResult = await csfleClientPatientsColl.findOne({
    ssn: exampleDocument["ssn"]
  })
  console.log(
    "Document retrieved with csfle enabled client:\n",
    csfleFindResult
  )

  // Performs a read using the regular client. We must query on a field that is
  // not encrypted.
  // Try - query on the ssn field. What is returned?
  const regularFindResult = await regularClientPatientsColl.findOne({
    name: "Jon Doe"
  })
  console.log("Document retrieved with regular client:\n", regularFindResult)

  await regularClient.close()
  await csfleClient.close()
}

main().catch(console.dir)
```

### Query for Documents on a Deterministically Encrypted Field

The following diagram shows the steps taken by the client application and driver to query and decrypt field-level encrypted data:

We can run queries on documents with encrypted fields using standard MongoDB driver methods. When a doctor performs a query in the Medical Care Management System to search for a patient by their SSN, the driver decrypts the patient's data before returning it:

``` json
{
  "_id": "5d6ecdce70401f03b27448fc",
  "name": "Jon Doe",
  "ssn": 241014209,
  "bloodType": "AB+",
  "medicalRecords": [
    {
      "weight": 180,
      "bloodPressure": "120/80"
    }
  ],
  "insurance": {
    "provider": "MaestCare",
    "policyNumber": 123142
  }
}
```

If you attempt to query your data with a MongoDB client that isn't configured with the correct key, this is what you will see:

And you should see your data written to your MongoDB Atlas database:

## Running in Docker

If you run into any issues running your code locally, I have developed a Docker image that you can use to help you get set up quickly or to troubleshoot local configuration issues. You can download the code here. Make sure you have Docker configured locally before you run the code. You can download Docker here.

1. Change directories to the Docker directory.

``` bash
cd docker
```

2. Build the Docker image with a tag name. Within this directory, execute:

``` bash
docker build . -t mdb-csfle-example
```

This will build a Docker image with a tag name *mdb-csfle-example*.

3. Run the Docker image by executing:

``` bash
docker run -tih csfle mdb-csfle-example
```

The command above will run a Docker image with the tag *mdb-csfle-example* and provide it with *csfle* as its hostname.

4. Once you're inside the Docker container, you can follow the steps below to run the Node.js code example.
``` bash
$ export MONGODB_URL="mongodb+srv://USER:PASSWORD@YOUR-CLUSTER.mongodb.net/dbname?retryWrites=true&w=majority"
$ node ./example.js
```

Note: If you're connecting to MongoDB Atlas, please make sure to Configure Allowlist Entries.

## Summary

We wanted to develop a system that securely stores sensitive medical records for patients. We also wanted strong data access and security guarantees that do not rely on individual users. After researching the available options, we determined that MongoDB Client-Side Field Level Encryption satisfies our requirements and decided to implement it in our application.

To implement CSFLE, we did the following:

**1. Created a Locally-Managed Master Encryption Key**

A locally-managed master key allowed us to rapidly develop the client application without external dependencies and avoid accidentally leaking sensitive production credentials.

**2. Generated an Encrypted Data Key with the Master Key**

CSFLE uses envelope encryption, so we generated a data key that encrypts and decrypts each field and then encrypted the data key using a master key. This allows us to store the encrypted data key in MongoDB so that it is shared with all clients while preventing access to clients that don't have access to the master key.

**3. Created a JSON Schema**

CSFLE can automatically encrypt and decrypt fields based on a provided JSON Schema that specifies which fields to encrypt and how to encrypt them.

**4. Tested and Validated Queries with the CSFLE Client**

We tested our CSFLE implementation by inserting and querying documents with encrypted fields. We then validated that clients without CSFLE enabled could not read the encrypted data.

## Move to Production

In this guide, we stored the master key in the local file system. Since your data encryption keys would be readable by anyone who gains direct access to your master key, we **strongly recommend** that you use a more secure storage location, such as a Key Management System (KMS).

## Further Reading

For more information on client-side field level encryption in MongoDB, check out the reference docs in the server manual:

- Client-Side Field Level Encryption
- Automatic Encryption JSON Schema Syntax
- Manage Client-Side Encryption Data Keys
- Comparison of Security Features
- For additional information on the MongoDB CSFLE API, see the official Node.js driver documentation
- Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
md
{ "tags": [ "JavaScript", "MongoDB", "Node.js" ], "pageDescription": "Learn how to encrypt document fields client-side in Node.js with MongoDB client-side field level encryption (CSFLE).", "contentType": "Tutorial" }
How to use MongoDB Client-Side Field Level Encryption (CSFLE) with Node.js
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/mongo-socket-chat-example
created
HELLO WORLD FROM FILE
md
{ "tags": [ "MongoDB" ], "pageDescription": "If you're interested in how to integrate MongoDB eventing with Socket.io, this tutorial builds in the Socket.IO getting started guide to incorporate MongoDB.", "contentType": "Tutorial" }
Integrating MongoDB Change Streams with Socket.IO
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/sizing-mongodb-with-jay-runkel
created
# Hardware Sizing for MongoDB with Jay Runkel The process of determining the right amount of server resources for your application database is a bit like algebra. The variables are many and varied. Here are just a few: - The total amount of data stored - Number of collections - Number of documents in each collection - Size of each document - Activity against the database - Number and frequency of reads - Number and frequency of writes, updates, deletes - Data schema and indexes - Number of index entries, size of documents indexed - Users - Proximity to your database servers - Total number of users, the pattern of usage (see reads/writes) These are just a few and it's a bit tricky because the answer to one of these questions may depend entirely on the answer to another, whose answer depends on yet another. Herein lies the difficulty with performing a sizing exercise. One of the best at this part science, part art exercise is Jay Runkel. Jay joined us on the podcast to discuss the process and possibilities. This article contains the transcript of that episode. If you prefer to listen, here's a link to the episode on YouTube. :youtube]{vid=OgGLl5KZJQM} ## Podcast Transcript Michael Lynn (00:00): Welcome to the podcast. On this episode, we're talking about sizing. It's a difficult task sometimes to figure out how much server you need in order to support your application and it can cost you if you get it wrong. So we've got the experts helping us today. We're bringing in Jay Runkel. Jay Runkel is an executive solutions architect here at MongoDB. Super smart guy. He's been doing this quite some time. He's helped hundreds of customers size their instances, maybe even thousands. So a great conversation with Jay Runkel on sizing your MongoDB instances. I hope you enjoy the episode. Michael Lynn (00:55): Jay, how are you? It's great to see you again. It's been quite a while for us. Why don't you tell the audience who you are and what you do? Jay Runkel (01:02): So I am a executive solution architect at MongoDB. So MongoDB sales teams are broken up into two classes individual. There are the sales reps who handle the customer relationship, a lot of the business aspects of the sales. And there are solution architects who play the role of presales, and we handle a lot of the technical aspects of the sales. So I spend a lot of time working with customers, understanding their technical challenges and helping them understand how MongoDB can help them solve those technical challenges. Michael Lynn (01:34): That's an awesome role. I spent some time as a solution architect over the last couple of years, and even here at MongoDB, and it's just such a fantastic role. You get to help customers through their journey, to using MongoDB and solve some of their technical issues. So today we're going to focus on sizing, what it's like to size a MongoDB cluster, whether it be on prem in your own data center or in MongoDB Atlas, the database as a service. But before we get there, I'd like to learn a little bit more about what got you to this point, Jay. Where were you before MongoDB? What were you doing? And how is it that you're able to bridge the gap between something that requires the skills of a developer, but also sort of getting into that sales role? Jay Runkel (02:23): Yeah, so my training and my early career experience was as a developer and I did that for about five, six years and realized that I did not want to sit in front of a desk every day. 
So what I did was I started looking for other roles where I could spend a lot more time with customers. And I happened to give a presentation in front of a sales VP one time about 25 years ago. And after the meeting, he said, "Hey, I really need you to help support the sales team." And that kind of started my career in presales. And I've worked for a lot of different companies over the years, most recently related to MongoDB. Before MongoDB, I worked for MarkLogic where MarkLogic is another big, no SQL database. And I got most of my experience around document databases at MarkLogic, since they have an XML based document database. Michael Lynn (03:18): So obviously working with customers and helping them understand how to use MongoDB and the document model, that's pretty technical. But the sales aspect of it is almost on the opposite end of the personality spectrum. How do you find that? Do you find that challenging going between those two types of roles? Jay Runkel (03:40): For me, it kind of almost all blurs together. I think in terms of this role, it's technical but sales kind of all merged together. You're either, we can do very non-technical things where you're just trying to understand a customer's business pain and helping them understand how if they went from MongoDB solution, it would address those business pain. But also you can get down into technology as well and work with the developer and understand some technical challenges they have and how MongoDB can solve that pain as well. So to me, it seems really seamless and most interactions with customers start at that high level where we're really understanding the business situation and the pain and where they want to be in the future. And generally the conversation evolves to, "All right, now that we have this business pain, what are the technical requirements that are needed to achieve that solution to remove the pain and how MongoDB can deliver on those requirements?" Nic Raboy (04:41): So I imagine that you experience a pretty diverse set of customer requests. Like every customer is probably doing something really amazing and they need MongoDB for a very specific use case. Do you ever feel like stressed out that maybe you won't know how to help a particular customer because it's just so exotic? Jay Runkel (05:03): Yes, but that's like the great thing about the job. The great thing about being at MongoDB is that often customers look at MongoDB because they failed with something else, either because they built an app like an Oracle or Postgres or something like that and it's not performing, or they can't roll out new functionality fast enough, or they've just looked at the requirements for this new application they want to build and realize they can't build it on traditional data platforms. So yeah, often you can get in with a customer and start talking about a use case or problem they have, and in the beginning, you can be, "Geez, I don't know how we're ever going to solve this." But as you get into the conversation, you typically work and collaborate with the customer. They know their business, they know their technical infrastructure. You know MongoDB. And by combining those two sources of information, very often, not always, you can come up with a solution to solve the problem. But that's the challenge, that's what makes it fun. 
Nic Raboy (06:07): So would I be absolutely incorrect if I said something like you are more in the role of filling the gap of what the customer is looking for, rather than trying to help them figure out what they need for their problem? It sounds like they came from maybe say an another solution that failed for them, you said. And so they maybe have a rough idea of what they want to accomplish with the database, but you need to get them to that next step versus, "Hey, I've got this idea. How do I execute this idea?" kind of thing. Jay Runkel (06:36): Yeah, I would say some customers, it's pretty simple, pretty straightforward. Let's say we want to build the shopping cart application. There's probably hundreds or thousands of shopping cart applications built on MongoDB. It's pretty cookie cutter. That's not a long conversation. But then there are other customers that want to be able to process let's say 500,000 digital payments per second and have all of these requirements around a hundred percent availability, be able to have the application continue running without a hiccup if a whole data center goes down where you have to really dig in and understand their use case and all the requirements to a fine grain detail to figure out a solution that will work for them. In that case the DevOps role is often who we're talking to. Nic Raboy (07:20): Awesome. Michael Lynn (07:21): Yeah. So before we get into the technical details of exactly how you do what you do in terms of recommending the sizing for a deployment, let's talk a little bit about the possibilities around MongoDB deployments. Some folks may be listening and thinking, "Well, I've got this idea for an app and it's on my laptop now and I know I have to go to production at some point." What are the options they have for deploying MongoDB? >You can run MongoDB on your laptop, move it to a mainframe, move it to the cloud in [MongoDB Atlas, move it from one cloud provider to another within Atlas, and no modifications to your code besides the connection string. Jay Runkel (07:46): So MongoDB supports just about every major platform you can consider. MongoDB realm has a database for a mobile device. MongoDB itself runs on Microsoft and MAC operating systems. It runs on IBM mainframes. It runs on a variety of flavors of Linux. You can also run MongoDB in the cloud either yourself, you can spin up a AWS instance or an Azure instance and install MongoDB and run it. Or we also have our cloud solution called Atlas where we will deploy and manage your MongoDB cluster for you on the cloud provider of your choice. So you essentially have that whole range and you can pick the platform and you can essentially pick who's going to manage the cluster for you. Michael Lynn (08:34): Fantastic. I mean, the options are limitless and the great thing is, the thing that you really did mention there, but it's a consistent API across all of those platforms. So you can develop and build your application, which leverages MongoDB in whatever language you're working in and not have to touch that regardless of the deployment target you use. So right on your laptop, run it locally, download MongoDB server and run it on your laptop. Run it in a docker instance and then deploy to literally anywhere and not have to touch your code. Is that the case? Jay Runkel (09:07): That's absolutely the case. 
You can run it on your laptop, move it to a mainframe, move it to the cloud in Atlas, move it from one cloud provider to another within Atlas, and no modifications to your code besides the connection string. Michael Lynn (09:20): Fantastic. Nic Raboy (09:21): But when you're talking to customers, we have all of these options. How do you determine whether or not somebody should be on prem or somebody should be in Atlas or et cetera? Jay Runkel (09:32): That's a great question. Now, I think from a kind of holistic perspective, everybody should be on Atlas because who wants to spend energy resources managing a database when that is something that MongoDB has streamlined, automated, ensured that it's deployed with best practices, with the highest level of security possible? So that's kind of the ideal case. I think that's where most of our customers are going towards. Now, there are certain industries and certain customers that have certain security requirements or policies that prevent them from running in a cloud provider, and those customers are the ones that still do self managed on-prem. Nic Raboy (10:15): But when it comes to things that require, say the self managed on-prem, those requirements, what would they be? Like HIPAA and FERPA and all of those other security reasons? I believe Atlas supports that, right? Jay Runkel (10:28): Yes. But I would say even if the regulations that will explicitly allow organizations to be in the cloud, many times they have internal policies that are additionally cautious and don't even want to take the risks, so they will just stay on prem. Other options are, if you're a company that has historically been deployed within your own data centers, if you have the new application that you're building, if it's the only thing in the cloud and all your app servers are still within your own data centers, sometimes that doesn't make a lot of sense as well. Michael Lynn (11:03): So I want to clear something up. You did mention, and your question was around compliance. And I want to just make sure it's clear. There's no reason why someone who requires compliance can't deploy in an Atlas apart from something internally, some internal compliance. I mean, we're able to manage applications that require HIPAA and FERPA and all of those compliance constraints, right? Jay Runkel (11:27): Absolutely. We have financial services organizations, healthcare companies that are running their business, their core applications, within Atlas today, managing all sorts of sensitive data, PII, HIPAA data. So, yeah, that has been done and can be done given all of the security infrastructure provided by Atlas. Nic Raboy (11:48): Awesome. Michael Lynn (11:49): Just wanted to clear that up. Go ahead, Nic. Nic Raboy (11:51): I wanted to just point out as a plug here, for anyone who's listening to this particular podcast episode, we recorded a previous episode with Ken White, right Mike? Michael Lynn (12:01): Right. Nic Raboy (12:01): ... on the different security practices of MongoDB, in case you want to learn more. Michael Lynn (12:06): Yeah. Great. Okay. So we're a couple of minutes in already and I'm chomping at the bit to get into the heart of the matter around sizing. But before we jump into the technical details, let's talk about what is big, what is small and kind of set the stage for the possibilities. Jay Runkel (12:24): Okay. 
So big and small is somewhat relative, but MongoDB has customers that have a simple replica set with a few gigabytes of data to customers that manage upwards of petabytes of data in MongoDB clusters. And the number of servers there can range from three instances in a replica set that maybe have one gigabyte of RAM each to a cluster that has several hundred servers and is maybe 50 or a hundred shards, something like that. Michael Lynn (12:59): Wow. Okay. So a pretty big range. And just to clarify the glossary here, Jay's using terms like replica set. For those that are new to MongoDB, MongoDB has built in high availability and you can deploy multiple instances of MongoDB that work in unison to replicate the changes to the database and we call that a cluster or a replica set. Great. So let's talk about the approach to sizing. What do you do when you're approaching a new customer or a new deployment and what do you need to think about when you start to think about how to size and implementation? Jay Runkel (13:38): Okay. So before we go there, let's even kind of talk about what sizing is and what sizing means. So typically when we talk about sizing in MongoDB, we're really talking about how big of a cluster do we need to solve a customer's problem? Essentially, how much hardware do we need to devote to MongoDB so that the application will perform well? And the challenge around that is that often it's not obvious. If you're building an application, you're going to know roughly how much data and roughly how the users are going to interact with the application. And somebody wants to know how many servers do you need and how much RAM do they have on them? How many cores? How big should the disks be? So it's a non-obvious, it's a pretty big gap from what you know, to what the answers you need. So what I hope I can do today is kind of walk you through how you get there. Michael Lynn (14:32): Awesome. Please do. Jay Runkel (14:33): Okay. So let's talk about that. So there's a couple things that we want to get to, like we said. First of all, we want to figure out, is it a sharded cluster? Not like you already kind of defined what sharding is, essentially. It's a way of partitioning the data so that you can distribute the data across a set of servers, so that you can have more servers either managing the data or processing queries. So that's one thing. We want to figure out how many partitions, how many shards of the data we need. And then we also need to figure out what do the specifications of those servers look like? How much RAM should they have? How much CPU? How much disk? That type of thing. Jay Runkel (15:12): So the easiest way I find to deal with this is to break this process up into two steps. The first step is just figure out the total amount of RAM we need, the total number of cores, essentially, the total amount of disk space, that type of thing. Once we have the totals, we can then figure out how many servers we need to deliver on those totals. So for example, if we do some math, which I'll explain in a little bit, and we figure out that we need 500 gigabytes of RAM, then we can figure out that we need five shards if all of our servers have a hundred gigabytes of RAM. That's pretty much kind of the steps we're going to go through. Just figure out how much RAM, how much disk, how much IO. And then figure out how many servers we need to deliver on those totals. Michael Lynn (15:55): Okay. So some basic algebra, and one of the variables is the current servers that we have. 
What if we don't have servers available and that's kind of an open and undefined variable? Jay Runkel (16:05): Yes, so in Atlas, you have a lot of options. There's not just one. Often if we're deploying in some customer's data center, they have a standard pizza box that goes in a rack, so we know what that looks like, and we can design to that. In something like Atlas, it becomes a price optimization problem. So if we figure out that we need 500 gigabytes of RAM, like I said, we can figure out is it better to do 10 shards where each shard has 50 gigabytes of RAM? Is it cheaper basically? Or should we do five shards where each shard has a hundred gigabytes of RAM? So in Atlas it's like, you really just kind of experiment and find the price point that is the most effective. Michael Lynn (16:50): Gotcha, okay. Nic Raboy (16:52): But are we only looking at a price point that is effective? I mean, maybe I missed it, but what are we gaining or losing by going with the 50 gigabyte shards versus the hundred gigabytes shards? Jay Runkel (17:04): So there are some other considerations. One is backup and restore time. If you partition the data, if you shard the data more, each partition has less data. So if you think about like recovering from a disaster, it will be faster because you're going to restore a larger number of smaller servers. That tends to be faster than restoring a single stream, restoring a fewer larger servers. The other thing is, if you think about many of our customers grow over time, so they're adding shards. If you use shards of smaller machines, then every incremental step is smaller. So it's easier to right size the cluster because you can, in smaller chunks, you can add additional shards to add more capacity. Where if you have fewer larger shards, every additional shard is a much bigger step in terms of capacity, but also cost. Michael Lynn (18:04): Okay. So you mentioned sharding and we briefly touched on what that is. It's partitioning of the data. Do you always shard? Jay Runkel (18:12): I would say most of our customers do not shard. I mean, a single replica set, which is one shard can typically, this is again going to depend on the workload and the server side and all that. But generally we see somewhere around one to two terabytes of data on a single replica set as kind of the upper bounds. And most of our applications, I don't know the exact percentages, but somewhere 80 - 90% of MongoDB applications are below the one terabyte range. So most applications, you don't even have to worry about sharding. Michael Lynn (18:47): I love it because I love rules of thumb, things that we can think about that like kind of simplify the process. And what I got there was look, if you've got one terabyte of data or more under management for your cluster, you're typically going to want to start to think about sharding. Jay Runkel (19:02): Think about it. And it might not be necessary, but you might want to start thinking about it. Yes. Michael Lynn (19:06): Okay, great. Now we mentioned algebra and one of the variables was the server size and the resources available. Tell me about the individual elements on the server that we look at and and then we'll transition to like what the application is doing and how we overlay that. Jay Runkel (19:25): Okay. So when you like look at a server, there's a lot of specifications that you could potentially consider. 
It turns out that with MongoDB, again let's say 95% of the time, the only things you really need to worry about is how much disk space, how much RAM, and then how fast of an IO system you have, really how many IOPS you need. It turns out other things like CPU and network, while theoretically they could be bottlenecks, most of the time, they're not. Normally it's disk space RAM and IO. And I would say it's somewhere between 98, 99% of MongoDB applications, if you size them just looking at RAM, IOPS, and disk space, you're going to do a pretty good estimate of sizing and you'll have way more CPU, way more network than you need. Michael Lynn (20:10): All right. I'm loving it because we're, we're progressing. So super simple rule of thumb, look at the amount of their database storage required. If you've got one terabyte or more, you might want to do some more math. And then the next step would be, look at the disk space, the RAM and the speed of the disks or the IOPS, iOS per second required. Jay Runkel (20:29): Yeah. So IOPS is a metric that all IO device manufacturers provide, and it's really a measurement of how fast the IO system can randomly access blocks of data. So if you think about what a database does, MongoDB or any database, when somebody issues a query, it's really going around on disk and grabbing the random blocks of data that satisfy that query. So IOPS is a really good metric for sizing IO systems for database. Michael Lynn (21:01): Okay. Now I've heard the term working set, and this is crucial when you're talking about sizing servers, sizing the deployment for a specific application. Tell me about the working set, what it is and how you determine what it is. Jay Runkel (21:14): Okay. So we said that we had to size three things: RAM, the IOPS, and the disk space. So the working set really helps us determine how much RAM we need. So the definition of working set is really the size of the indexes plus the set of frequently accessed documents used by the application. So let me kind of drill into that a little bit. If you're thinking about any database, MongoDB included, if you want good performance, you want the stuff that is frequently accessed by the database to be in memory, to be in cache. And if it's not in cache, what that means is the server has to go to the disk, which is really slow, at least in comparison to RAM. So the more of that working set, the indexes and the frequently accessed documents fit into memory, the better performance is going to be. The reason why you want the indexes in memory is that just about every query, whether it is a fine query or an update, is going to have to use the indexes to find the documents that are going to be affected. And therefore, since every query needs to use the indexes, you want them to be in cache, so that performance is good. Michael Lynn (22:30): Yeah. That makes sense. But let's double click on this a little bit. How do I go about determining what the frequently accessed documents are? Jay Runkel (22:39): Oh, that's a great question. That's unfortunately, that's why there's a little bit of art to sizing, as opposed to us just shipping out a spreadsheet and saying, "Fill it out and you get the answer." So the frequently accessed documents, it's really going to depend upon your knowledge of the application and how you would expect it to be used or how users are using it if it's already an application that's in production. So it's really the set of data that is accessed all the time. 
So I can give you some examples and maybe that'll make it clear. Michael Lynn (23:10): Yeah, perfect. Jay Runkel (23:10): Let's say it's an application where customers are looking up their bills. Maybe it's a telephone company or cable company or something like that or Hulu, Netflix, what have you. Most of the time, people only care about the bills that they got this month, last month, maybe two months ago, three months ago. If you're somebody like me that used to travel a lot before COVID, maybe you get really far behind on your expense reports and you look back four or five months, but rarely ever passed that. So in that type of application, the frequently accessed documents are probably going to be the current month's bills. Those are the ones that people are looking at all the time, and the rest of the stuff doesn't need to be in cache because it's not accessed that often. Nic Raboy (23:53): So what I mean, so as far as the frequently accessed, let's use the example of the most recent bills. What if your application or your demand is so high? Are you trying to accommodate all most recent bills in this frequently accessed or are you further narrowing down the subset? Jay Runkel (24:13): I think the way I would look at it for that application specific, it's probably if you think about this application, let's say you've got a million customers, but maybe only a thousand are ever online at the same time, you really are just going to need the indexes plus the data for the thousand active users. If I log into the application and it takes a second or whatever to bring up that first bill, but everything else is really fast after that as I drill into the different rows in my bill or whatever, I'm happy. So that's typically what you're looking at is just for the people that are currently engaged in the system, you want their data to be in RAM. Michael Lynn (24:57): So I published an article maybe two or three years ago, and the title of the article was "Knowing the Unknowable." And that's a little bit of what we're talking about here, because you're mentioning things like indexes and you're mentioning things like frequently accessed documents. So this is obviously going to require that you understand how your data is laid out. And we refer to that as a schema. You're also going to have to have a good understanding of how you're indexing, what indexes you're creating. So tell me Jay, to what degree does sizing inform the schema or vice versa? Jay Runkel (25:32): So, one of the things that we do as part of the kind of whole MongoDB design process is make sizing as part of the design processes as you're suggesting. Because what can happen is, you can come up with a really great schema and figure out what index is you use, and then you can look at that particular design and say, "Wow, that's going to mean I'm going to need 12 shards." You can think about it a little bit further, come up with a different schema and say, "Oh, that one's only going to require two shards." So if you think about, now you've got to go to your boss and ask for hardware. If you need two shards, you're probably asking for six servers. If you have 12 shards, you're asking for 36 servers. I guarantee your boss is going to be much happier paying for six versus 36. So obviously it is definitely a trade off that you want to make certain. Schemas will perform better, they may be easier to develop, and they also will have different implications on the infrastructure you need. Michael Lynn (26:35): Okay. 
And so obviously the criticality of sizing is increased when you're talking about an on-prem deployment, because obviously to get a server into place, it's a purchase. You're waiting for it to come. You have to do networking. Now when we move to the cloud, it's somewhat reduced. And I want to talk a little bit about the flexibility that comes with a deployment in MongoDB Atlas, because we know that MongoDB Atlas starts at zero. We have a free forever instance, that's called an M0 tier and it goes all the way up to M700 with a whole lot of RAM and a whole lot of CPU. What's to stop me from saying, "Okay, I'm not really going to concentrate on sizing and maybe I'll just deploy in an M0 and see how it goes." >MongoDB Atlas offers different tiers of clusters with varying amounts of RAM, CPU, and disk. These tiers are labeled starting with M0 - Free, and continuing up to M700 with massive amounts of RAM and CPU. Each tier also offers differing sizes and speeds of disks. Jay Runkel (27:22): So you could, actually. That's the really fabulous thing about Atlas is you could deploy, I wouldn't start with M0, but you might start with an M10 and you could enable, there's kind of two features in Atlas. One will automatically scale up the disk size for you. So as you load more data, it will, I think as the disk gets about 90% full, it will automatically scale it up. So you could start out real small and just rely on Atlas to scale it up. And then similarly for the instance size itself, there's another feature where it will automatically scale up the instance as the workload. So as you start using more RAM and CPU, it will automatically scale the instance. So that it would be one way. And you could say, "Geez, I can just drop from this podcast right now and just use that feature and that's great." But often what people want is some understanding of the budget. What should they expect to spend in Atlas? And that's where the sizing comes in useful because it gives you an idea of, "What is my Atlas budget going to be?" Nic Raboy (28:26): I wanted to do another shameless plug here for a previous podcast episode. If you want to learn more about the auto-scaling functionality of Atlas, we actually did an episode. It's part of a series with Rez Con from MongoDB. So if this is something you're interested in learning more about, definitely check out that previous episode. Michael Lynn (28:44): Yeah, so auto-scaling, an incredible feature. So what I heard Jay, is that you could under deploy and you could manually ratchet up as you review the shards and look at the monitoring. Or you could implement a relatively small instance size and rely on MongoDB to auto-scale you into place. Jay Runkel (29:07): Absolutely, and then if your boss comes to you and says, "How much are we going to be spending in November on Atlas?" You might want to go through some of this analysis we've been talking about to figure out, "Well, what size instance do we actually need or where do I expect that list to scale us up to so that I can have some idea of what to tell my boss." Michael Lynn (29:27): Absolutely. That's the one end of the equation. The other end of the equation is the performance. So if you're under scaling and waiting for the auto-scale to kick in, you're most likely going to experience some pain on the user front, right? Jay Runkel (29:42): So it depends. If you have a workload that is going to take big steps up. 
I mean, there's no way for Atlas to know that right now, you're doing 10 queries a second and on Monday you're doing a major marketing initiative and you expect your user base to grow and starting Monday afternoon instead of 10 queries a second, you're going to have a thousand queries per second. There's no way for Atlas to predict that. So if that's the case, you should manually scale up the cluster in advance of that so you don't have problems. Alternatively, though, if you just, every day you're adding a few users and over time, they're loading more and more data, so the utilization is growing at a nice, steady, linear pace, then Atlas should be able to predict, "Hey, that trend is going to continue," and scale you up, and you should probably have a pretty seamless auto scale and good customer experience. Michael Lynn (30:40): So it sounds like a great safety net. You could do your, do your homework, do your sizing, make sure you're informing your decisions about the schema and vice versa, and then make a bet, but also rely on auto-scaling to select the minimum and also specify a maximum that you want to scale into. Jay Runkel (30:57): Absolutely. Michael Lynn (30:58): Wow. So we've covered a lot of ground. Nic Raboy (30:59): So I have some questions since you actually do interface with customers. When you're working with them to try to find a scaling solution or a sizing solution for them, do you ever come to the scenario where, you know what, the customer assumed that they're going to need all of this, but in reality, they need far less or the other way around? Jay Runkel (31:19): So I think both scenarios are true. I think there are customers that are used to using relational databases and doing sizings for those. And those customers are usually positively happy when they see how much hardware they need for MongoDB. Generally, given the fact that MongoDB is a document model and uses way far fewer joints that the server requirements to satisfy the same workload for MongoDB are significantly less than a relational database. I think we also run into customers though that have really high volume workloads and maybe have unrealistic budgetary expectations as well. Maybe it's their first time ever having to deal with the problem of the scale that they're currently facing. So sometimes that requires some education and working with that customer. Michael Lynn (32:14): Are there tools available that customers can use to help them in this process? >...typically the index size is 10% of the data size. But if you want to get more accurate, what you can do is there are tools out there, one's called Faker... Jay Runkel (32:18): So there's a couple of things. We talked about trying to figure out what our index sizes are and things like that. What if you don't, let's say you're just starting to design the application. You don't have any data. You don't know what the indexes are. It's pretty hard to kind of make these kinds of estimates. So there's a couple of things you can do. One is you can use some rule of thumbs, like typically the index size is 10% of the data size. But if you want to get more accurate, what you can do is there are tools out there, one's called Faker for Python. There's a website called Mockaroo where it enables you to just generate a dataset. You essentially provide one document and these tools or sites will generate many documents and you can load those into MongoDB. You can build your indexes. And then you can just measure how big everything is. 
So that's kind of some tools that give you the ability to figure out what at least the index size of the working set is going to be just by creating a dataset. Michael Lynn (33:16): Yeah. Love those tools. So to mention those again, I've used those extensively in sizing exercises. Mockaroo is a great online. It's just in a webpage and you specify the shape of the document that you want and the number of documents you want created. There's a free tier and then there's a paid tier. And then Faker is a JavaScript library I've used a whole lot to generate fake documents. Jay Runkel (33:37): Yeah. I think it's also available in Python, too. Michael Lynn (33:40): Oh, great. Yeah. Terrific. Nic Raboy (33:41): Yeah, this is awesome. If people have more questions regarding sizing their potential MongoDB clusters, are you active in the MongoDB community forums by chance? Jay Runkel (33:56): Yes, I definitely am. Feel free to reach out to me and I'd be happy to answer any of your questions. Nic Raboy (34:03): Yeah, so that's community.MongoDB.com for anyone who's never been to our forums before. Michael Lynn (34:09): Fantastic. Jay, we've covered a lot of ground in a short amount of time. I hope this was really helpful for developers. Obviously it's a topic we could talk about for a long time. We like to keep the episodes around 30 to 40 minutes. And I think we're right about at that time. Is there anything else that you'd like to share with folks listening in that want to learn about sizing? Jay Runkel (34:28): So I gave a presentation on sizing in MongoDB World 2017, and that video is still available. So if you just go to MongoDB's website and search for Runkel and sizing, you'll find it. And if you want to get an even more detailed view of sizing in MongoDB, you can kind of take a look at that presentation. Nic Raboy (34:52): So 2017 is quite some time ago in tech years. Is it still a valid piece of content? Jay Runkel (35:00): I don't believe I mentioned the word Atlas in that presentation, but the concepts are all still valid. Michael Lynn (35:06): So we'll include a link to that presentation in the show notes. Be sure to look for that. Where can people find you on social? Are you active in the social space? Jay Runkel (35:16): You can reach me at Twitter at @jayrunkel. I do have a Facebook account and stuff like that, but I don't really pay too much attention to it. Michael Lynn (35:25): Okay, great. Well, Jay, it's been a great conversation. Thanks so much for sharing your knowledge around sizing MongoDB. Nic, anything else before we go? Nic Raboy (35:33): No, that's it. This was fantastic, Jay. Jay Runkel (35:36): I really appreciate you guys having me on. Michael Lynn (35:38): Likewise. Have a great day. Jay Runkel (35:40): All right. Thanks a lot. Speaker 2 (35:43): Thanks for listening. If you enjoyed this episode, please like and subscribe. Have a question or a suggestion for the show? Visit us in the MongoDB Community Forums at https://www.mongodb.com/community/forums/. ### Summary Determining the correct amount of server resource for your databases involves an understanding of the types, amount, and read/write patterns of the data. There's no magic formula that works in every case. Thanks to Jay for helping us explore the process. Jay put together a presentation from MongoDB World 2017 that is still very applicable.
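Jay and Mike mention Faker and Mockaroo only in passing, so as a concrete (and entirely illustrative) sketch, here is one way that "generate fake data, build your indexes, measure everything" workflow could look in Python with the `faker` and `pymongo` packages. The connection string, collection name, and document shape below are placeholders, not anything from the episode; adjust them to resemble your own workload.

``` python
# Generate representative fake documents, load them into MongoDB, build the
# indexes you expect to need, and then measure data and index sizes.
from faker import Faker
from pymongo import MongoClient

fake = Faker()
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
coll = client["sizing_test"]["users"]

# Load a sample of fake documents shaped roughly like your real data.
coll.insert_many(
    [
        {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address(),
            "created_at": fake.date_time_this_year(),
        }
        for _ in range(10_000)
    ]
)

# Build the indexes your workload will actually use.
coll.create_index("email")
coll.create_index([("created_at", -1)])

# Measure how big everything is.
stats = client["sizing_test"].command("collstats", "users")
print("data size (bytes):", stats["size"])
print("total index size (bytes):", stats["totalIndexSize"])
print("per-index sizes:", stats["indexSizes"])
```

From there, scaling the measured sizes by the ratio of your expected production document count to the sample size gives a rough working set estimate, which is exactly the kind of input the sizing process discussed above needs.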
md
{ "tags": [ "MongoDB" ], "pageDescription": "Hardware Sizing MongoDB with Jay Runkel", "contentType": "Podcast" }
Hardware Sizing for MongoDB with Jay Runkel
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/gatsby-modern-blog
created
# Build a Modern Blog with Gatsby and MongoDB

The web, like many other industries, works in a very cyclical way. Trends are constantly born and reborn. One of my favorite trends that's making a huge comeback is static websites and a focus on website performance. GatsbyJS presents a new way of building websites that merges the static with the dynamic and, in my opinion, provides a worthwhile framework for your consideration.

In today's tutorial, we're going to take a look at how we can leverage GatsbyJS and MongoDB to build a modern blog that can be served anywhere. We'll dive into how GraphQL makes it easy to visualize and work with our content regardless of where it's coming from. Get the code from this GitHub repo to follow along.

## Prerequisites

For this tutorial you'll need:

- Node.js
- npm
- MongoDB

You can download Node.js here, and it will come with the latest version of npm. For MongoDB, you can use an existing install or MongoDB Atlas for free. The dataset we'll be working with comes from Hakan Özler, and can be found in this GitHub repo. All other required items will be covered in the article.

## What We're Building: A Modern Book Review Blog

The app that we are building today is called Books Plus. It is a blog that reviews technical books.

## Getting Started with GatsbyJS

GatsbyJS is a React-based framework for building highly performant websites and applications. The framework allows developers to utilize the modern JavaScript landscape to quickly build static websites.

What makes GatsbyJS really stand out is the ecosystem built around it. Plugins for all sorts of features and functionality easily interoperate to provide a powerful toolkit for anything you want your website to do.

The second key feature of GatsbyJS is its approach to data sources. While most static website generators simply process Markdown files into HTML, GatsbyJS provides a flexible mechanism for working with data from any source. In our article today, we'll utilize this functionality to show how we can have data both in Markdown files as well as in a MongoDB database, and GatsbyJS will handle it all the same.

## Setting Up Our Application

To create a GatsbyJS site, we'll need to install the Gatsby CLI. In your Terminal window run `npm install -g gatsby-cli`.

To confirm that the CLI is properly installed, run `gatsby --help` in your Terminal. You'll see a list of available commands such as **gatsby build** and **gatsby new**. If you see information similar to the screenshot above, you are good to go.

The next step will be to create a new GatsbyJS website. There are a couple of different ways we can do this. We can start with a barebones GatsbyJS app or a starter app that has various plugins already installed. To keep things simple, we'll opt for the former. To create a new barebones GatsbyJS website, run the following command:

``` bash
gatsby new booksplus
```

Executing this command in your Terminal will create a new barebones GatsbyJS application in a directory called `booksplus`. Once the installation is complete, navigate to this new directory by running `cd booksplus`, and once in this directory, let's start up our local GatsbyJS development server. To do this, we'll run the following command in our Terminal window.

``` bash
gatsby develop
```

This command will take a couple of seconds to execute, but once it has, you'll be able to navigate to `localhost:8000` to see the default GatsbyJS starter page.

The default page is not very impressive, but seeing it tells us that we are on the right path.
You can also click the **Go to page 2** hyperlink to see how Gatsby handles navigation. ## GatsbyJS Secret Sauce: GraphQL If you were paying attention to your Terminal window while GatsbyJS was building and starting up the development server you may have also noticed a message saying that you can navigate to `localhost:8000/___graphql` to explore your sites data and schema. Good eye! If you haven't, that's ok, let's navigate to this page as well and make sure that it loads and works correctly. GraphiQL is a powerful user interface for working with GraphQL schemas, which is what GatsbyJS generates for us when we run `gatsby develop`. All of our websites content, including pages, images, components, and so on become queryable. This API is automatically generated by Gatsby's build system, we just have to learn how to use it to our advantage. If we look at the **Explorer** tab in the GraphiQL interface, we'll see the main queries for our API. Let's run a simple query to see what our current content looks like. The query we'll run is: ``` javascript query MyQuery { allSitePage { totalCount } } ``` Running this query will return the total number of pages our website currently has which is 5. We can add on to this query to return the path of all the pages. This query will look like the following: ``` javascript query MyQuery { allSitePage { totalCount nodes { path } } } ``` And the result: The great thing about GraphQL and GraphiQL is that it's really easy to build powerful queries. You can use the explorer to see what fields you can get back. Covering all the ins and outs of GraphQL is out of the scope of this article, but if you are interested in learning more about GraphQL check out this crash course that will get you writing pro queries in no time. Now that we have our app set up, let's get to building our application. ## Adding Content To Our Blog A blog isn't very useful without content. Our blog reviews books. So the first thing we'll do is get some books to review. New books are constantly being released, so I don't think it would be wise to try and keep track of our books within our GatsbyJS site. A database like MongoDB on the other hand makes sense. Hakan Özler has a curated list of datasets for MongoDB and one of them just happens to be a list of 400+ books. Let's use this dataset. I will import the dataset into my database that resides on MongoDB Atlas. If you don't already have MongoDB installed, you can get a free account on MongoDB Atlas. In my MongoDB Atlas cluster, I will create a new database and call it `gatsby`. In this new database, I will create a collection called `books`. There are many different ways to import data into your MongoDB database, but since I'm using MongoDB Atlas, I'll just import it directly via the web user interface. Our sample dataset contains 431 books, so after the import we should see 431 documents in the books collection. ## Connecting MongoDB and GatsbyJS Now that we have our data, let's use it in our GatsbyJS application. To use MongoDB as a data source for our app, we'll need to install the `gatsby-source-mongodb` plug in. Do so by running ``` bash npm install --save gatsby-source-mongodb ``` in your Terminal window. With the plugin installed, the next step will be to configure it. Open up the `gatsby-config.js` file. This file contains our site metadata as well as plugin configuration options. In the `plugins` array, let's add the `gatsby-source-mongodb` plugin. 
It will look something like this: ``` javascript { // The name of the plugin resolve: 'gatsby-source-mongodb', options: { // Name of the database and collection where are books reside dbName: 'gatsby', collection: 'books', server: { address: 'main-shard-00-01-zxsxp.mongodb.net', port: 27017 }, auth: { user: 'ado', password: 'password' }, extraParams: { replicaSet: 'Main-shard-0', ssl: true, authSource: 'admin', retryWrites: true } } }, ``` Save the file. If your `dbName` and `collection` are different from the above, take note of them as the naming here is very important and will determine how you interact with the GraphQL API. If your GatsbyJS website is still running, stop it, run `gatsby clean` and then `gatsby develop` to restart the server. The `gatsby clean` command will clear the cache and delete the previous version. In my experience, it is recommended to run this as otherwise you may run into issues with the server restarting correctly. When the `gatsby develop` command has successfully been re-run, navigate to the GraphiQL UI and you should see two new queries available: `mongodbGatsbyBooks` and `allMongodbGatsbyBooks`. Please note that if you named your database and collection something different, then these query names will be different. The convention they will follow though will be `mongodb` and `allMongodb`. Let's play with one of these queries and see what data we have access to. Execute the following query in GraphiQL: ``` javascript query MyQuery { allMongodbGatsbyBooks { edges { node { title } } } } ``` Your result will look something like this: Excellent. Our plugin was configured successfully and we see our collection data in our GatsbyJS website. We can add on to this query by requesting additional parameters like the authors, categories, description and so on, but rather than do that here, why don't we render it in our website. ## Displaying Book Data On the Homepage We want to display the book catalog on our homepage. Let's open up the `index.js` page located in the `src/pages` directory. This React component represent our homepage. Let's clean it up a bit before we start adding additional styles. Our new barebones component will look like this: ``` javascript import React from "react" import { Link } from "gatsby" import Layout from "../components/layout" const IndexPage = () => ( ) export default IndexPage ``` Next let's add a GraphQL query to get our books data into this page. The updated code will look like this: ``` javascript import React from "react" import { Link } from "gatsby" import { graphql } from "gatsby" import Layout from "../components/layout" const IndexPage = () => ( ) export default IndexPage export const pageQuery = graphql` query { allMongodbGatsbyBooks { edges { node { id title shortDescription thumbnailUrl } } } } ` ``` We are making a call to the `allMongodbGatsbyBooks` query and asking for all the books in the collection. For each book we want to get its id, title, shortDescription and thumbnailUrl. Finally, to get this data into our component, we'll pass it through props: ``` javascript import React from "react" import { Link } from "gatsby" import { graphql } from "gatsby" import Layout from "../components/layout" const IndexPage = (props) => { const books = props.data.allMongodbGatsbyBooks.edges; return ( ) } export default IndexPage export const pageQuery = graphql` query { allMongodbGatsbyBooks { edges { node { id title shortDescription thumbnailUrl } } } } ` ``` Now we can render our books to the page. 
We'll do so by iterating over the books array and displaying all of the information we requested. The code will look like this: ``` javascript return ( {books.map(book => {BOOK.NODE.TITLE} {book.node.shortDescription} )} ) ``` Let's go to `localhost:8000` and see what our website looks like now. It should look something like: If you start scrolling you'll notice that all 400+ books were rendered on the page. All this data was cached so it will load very quickly. But if we click on any of the links, we will get a 404. That's not good, but there is a good reason for it. We haven't created an individual view for the books. We'll do that shortly. The other issue you might have noticed is that we added the classes `book-container` and `book` but they don't seem to have applied any sort of styling. Let's fix that issue first. Open up the `layout.css` file located in the `src/components` directory and add the following styles to the bottom of the page: ``` javascript .book-container { display: flex; flex-direction: row; flex-wrap: wrap } .book { width: 25%; flex-grow: 1; text-align: center; } .book img { width: 50%; } ``` Next, let's simplify our UI by just displaying the cover of the book. If a user wants to learn more about it, they can click into it. Update the `index.js` return to the following: ``` javascript const IndexPage = (props) => { const books = props.data.allMongodbGatsbyBooks.edges; return ( {books.map(book => {book.node.thumbnailUrl && } )} ) } ``` While we're at it, let's change the name of our site in the header to Books Plus by editing the `gatsby-config.js` file. Update the `siteMetadata.title` property to **Books Plus**. Our updated UI will look like this: ## Creating the Books Info Page As mentioned earlier, if you click on any of the book covers you will be taken to a 404 page. GatsbyJS gives us multiple ways to tackle how we want to create this page. We can get this content dynamically, but I think pre-rendering all of this pages at build time will give our users a much better experience, so we'll do that. The first thing we'll need to do is create the UI for what our single book view page is going to look like. Create a new file in the components directory and call it `book.js`. The code for this file will look like this: ``` javascript import React from "react" import { graphql } from "gatsby" import Layout from "./layout" class Item extends React.Component { render() { const book = this.props.data.mongodbGatsbyBooks return ( {BOOK.TITLE} By {book.authors.map(author => ( {author}, ))} {book.longDescription} Published: {book.publishedDate} | ISBN: {book.isbn} {book.categories.map(category => category)} ) } } export default Item export const pageQuery = graphql` query($id: String!) { mongodbGatsbyBooks(id: { eq: $id }) { id title longDescription thumbnailUrl isbn pageCount publishedDate(formatString: "MMMM DD, YYYY") authors categories } } ` ``` To break down what is going on in this component, we are making use of the `mongodbGatsbyBooks` query which returns information requested on a single book based on the `id` provided. That'll do it for our component implementation. Now let's get to the fun part. Essentially what we want to happen when we start up our Gatsby server is to go and get all the book information from our MongoDB database and create a local page for each document. To do this, let's open up the `gatsby-node.js` file. 
Add the following code and I'll explain it below: ``` javascript const path = require('path') exports.createPages = async ({ graphql, actions }) => { const { createPage } = actions const { data } = await graphql(` { books: allMongodbGatsbyBooks { edges { node { id } } } } `) const pageTemplate = path.resolve('./src/components/book.js') for (const { node } of data.books.edges) { createPage({ path: '/book/${node.id}/', component: pageTemplate, context: { id: node.id, }, }) } } ``` The above code will do the heavy lifting of going through our list of 400+ books and creating a static page for each one. It does this by utilizing the Gatsby `createPages` API. We supply the pages we want, alongside the React component to use, as well as the path and context for each, and GatsbyJS does the rest. Let's save this file, run `gatsby clean` and `gatsby develop`, and navigate to `localhost:8000`. Now when the page loads, you should be able to click on any of the books and instead of seeing a 404, you'll see the details of the book rendered at the `/book/{id}` url. So far so good! ## Writing Book Reviews with Markdown We've shown how we can use MongoDB as a data source for our books. The next step will be to allow us to write reviews on these books and to accomplish that we'll use a different data source: trusty old Markdown. In the `src` directory, create a new directory and call it `content`. In this directory, let's create our first post called `welcome.md`. Open up the new `welcome.md` file and paste the following markdown: ``` --- title: Welcome to Books Plus author: Ado Kukic slug: welcome --- Welcome to BooksPlus, your trusted source of tech book reviews! ``` Save this file. To use Markdown files as our source of content, we'll have to add another plugin. This plugin will be used to transform our `.md` files into digestible content for our GraphQL API as well as ultimately our frontend. This plugin is called `gatsby-transformer-remark` and you can install it by running `npm install --save gatsby-transformer-remark`. We'll have to configure this plugin in our `gatsby-config.js` file. Open it up and make the following changes: ``` javascript { resolve: 'gatsby-source-filesystem', options: { name: 'content', path: `${__dirname}/src/content/`, }, }, 'gatsby-transformer-remark', ``` The `gatsby-source-filesystem` plugin is already installed, and we'll overwrite it to just focus on our markdown files. Below it we'll add our new plugin to transform our Markdown into a format our GraphQL API can work with. While we're at it we can also remove the `image.js` and `seo.js` starter components as we will not be using them in our application. Let's restart our Gatsby server and navigate to the GraphiQL UI. We'll see two new queries added: `allMarkdownRemark` and `markdownRemark`. These queries will allow us to query our markdown content. Let's execute the following query: ``` javascript query MyQuery { allMarkdownRemark { edges { node { frontmatter { title author } html } } } } ``` Our result should look something like the screenshot below, and will look exactly like the markdown file we created earlier. ## Rendering Our Blog Content Now that we can query our markdown content, we can just as pre-generate the markdown pages for our blog. Let's do that next. The first thing we'll need is a template for our blog. To create it, create a new file called `blog.js` located in the `src/components` directory. 
My code will look like this: ``` javascript import React from "react" import { graphql } from "gatsby" import Layout from "./layout" class Blog extends React.Component { render() { const post = this.props.data.markdownRemark return ( {POST.FRONTMATTER.TITLE} BY {POST.FRONTMATTER.AUTHOR} ) } } export default Blog export const pageQuery = graphql` query($id: String!) { markdownRemark(frontmatter : {slug: { eq: $id }}) { frontmatter { title author } html } } ` ``` Next we'll need to tell Gatsby to build our markdown pages at build time. We'll open the `gatsby-node.js` file and make the following changes: ``` javascript const path = require('path') exports.createPages = async ({ graphql, actions }) => { const { createPage } = actions const { data } = await graphql(` { books: allMongodbGatsbyBooks { edges { node { id } } } posts: allMarkdownRemark { edges { node { frontmatter { slug } } } } } `) const blogTemplate = path.resolve('./src/components/blog.js') const pageTemplate = path.resolve('./src/components/book.js') for (const { node } of data.posts.edges) { createPage({ path: `/blog/${node.frontmatter.slug}/`, component: blogTemplate, context: { id: node.frontmatter.slug }, }) } for (const { node } of data.books.edges) { createPage({ path: `/book/${node.id}/`, component: pageTemplate, context: { id: node.id, }, }) } } ``` The changes we made above will not only generate a different page for each book, but will now generate a unique page for every markdown file. Instead of using a randomly generate id for the content page, we'll use the user-defined slug in the frontmatter. Let's restart our Gatsby server and navigate to `localhost:8000/blog/welcome` to see our changes in action. ## Displaying Posts on the Homepage We want our users to be able to read our content and reviews. Currently you can navigate to `/blog/welcome` to see the post, but it'd be nice to display our latest blog posts on the homepage as well. To do this we'll, make a couple of updates on our `index.js` file. We'll make the following changes: ``` javascript import React from "react" import { Link } from "gatsby" import { graphql } from "gatsby" import Layout from "../components/layout" const IndexPage = (props) => { const books = props.data.books.edges; const posts = props.data.posts.edges; return ( {posts.map(post => {POST.NODE.FRONTMATTER.TITLE} By {post.node.frontmatter.author} )} {books.map(book => {book.node.thumbnailUrl && } )} ) } export default IndexPage export const pageQuery = graphql` query { posts: allMarkdownRemark { edges { node { frontmatter { title slug author } } } } books: allMongodbGatsbyBooks { edges { node { id title shortDescription thumbnailUrl } } } } ` ``` We've updated our GraphQL query to get us not only the list of books, but also all of our posts. We named these queries `books` and `posts` accordingly so that it's easier to work with them in our template. Finally we updated the template to render the new UI. If you navigate to `localhost:8000` now you should see your latest post at the top like this: And of course, you can click it to view the single blog post. ## Combining Mongo and Markdown Data Sources The final thing I would like to do in our blog today is the ability to reference a book from MongoDB in our review. This way when a user reads a review, they can easily click through and see the book information. To get started with this, we'll need to update our `gatsby-node.js` file to allow us to query a specific book provided in the frontmatter of a post. 
We'll update the `allMarkdownRemark` so that in addition to getting the slug, we'll get the book parameter. The query will look like this: ``` javascript allMarkdownRemark { edges { node { frontmatter { slug book } } } } ... } ``` Additionally, we'll need to update our `createPage()` method when generating the blog pages, to pass along the book information in the context. ``` javascript createPage({ path: `/blog/${node.frontmatter.slug}/`, component: blogTemplate, context: { id: node.frontmatter.slug, book: node.frontmatter.book }, }) } ``` We'll be able to use anything passed in this `context` property in our GraphQL queries in our blog component. Next, we'll update our blog component to account for the new query. This query will be the MongoDB based book query. It will look like so: ``` javascript export const pageQuery = graphql` query($id: String!, $book: String) { post: markdownRemark(frontmatter : {slug: { eq: $id }}) { id frontmatter { title author } html } book: mongodbGatsbyBooks(id: { eq: $book }) { id thumbnailUrl } } ` ``` Notice that the `$book` parameter is optional. This means that a post could be associated with a specific book, but it doesn't have to be. We'll update our UI to display the book information if a book is provided. ``` javascript class Blog extends React.Component { render() { const post = this.props.data.post const book = this.props.data.book return ( {POST.FRONTMATTER.TITLE} BY {POST.FRONTMATTER.AUTHOR} {book && } ) } } ``` If we look at our original post, it doesn't have a book associated with it, so that specific post shouldn't look any different. But let's write a new piece of content, that does contain a review of a specific book. Create a new markdown file called `mongodb-in-action-review.md`. We'll add the following review: ``` javascript --- title: MongoDB In Action Book Review author: Ado Kukic slug: mdb-in-action-review book: 30e4050a-da76-5c08-a52c-725b4410e69b --- MongoDB in Action is an essential read for anybody wishing to learn the ins and outs of MongoDB. Although the book has been out for quite some time, it still has a lot of valuable information and is a great start to learning MongoDB. ``` Restart your Gatsby server so that the new content can be generated. On your homepage, you'll now see two blog posts, the original **Welcome** post as well as the new **MongoDB In Action Review** post. Clicking the **MongoDB In Action Review** link will take you to a blog page that contains the review we wrote a few seconds ago. But now, you'll also see the thumbnail of the book. Clicking this thumbnail will lead you to the books page where you can learn more about the book. ## Putting It All Together In this tutorial, I showed you how to build a modern blog with GatsbyJS. We used multiple data sources, including a remote MongoDB Atlas database and local markdown files, to generate a static blog. We took a brief tour of GraphQL and how it enhances our development experience by consolidating all of our data sources into a single API that we can query both at build and run time. I hope you learned something new, if you have any questions feel free to ask in our MongoDB community forums. >If you want to get the code for this tutorial, you can clone it from this GitHub repo. The sample books dataset can also be found here. Try MongoDB Atlas to make it easy to manage and scale your MongoDB database. Happy coding!
md
{ "tags": [ "JavaScript", "MongoDB" ], "pageDescription": "Learn how to build a modern blog with GatsbyJS, MongoDB, and Markdown.", "contentType": "Tutorial" }
Build a Modern Blog with Gatsby and MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/introduction-to-modern-databases-mongodb-academia
created
# MongoDB Academia - Introduction to Modern Databases ## Introduction As part of the MongoDB for Academia program, we are pleased to announce the publication of a new course, Introduction to Modern Databases, and related teaching materials. The course materials are designed for the use of educators teaching MongoDB in universities, colleges, technical bootcamps, or other learning programs. In this article, we describe why we've created this course, its structure and content, and how educators can use this material to support hands-on learning with the MongoDB Web Shell. ## Table of Contents - Course Format - Why Create This Course? - Course Outline - Course Lessons - What is in a Lesson - Using the MongoDB Web Shell - What is MongoDB for Academia? - Course Materials and Getting Involved in the MongoDB for Academia Program ## Course Format Introduction to Modern Databases has been designed to cover the A-Z of MongoDB for educators. The course consists of 22 lessons in slide format. Educators are welcome to teach the entire course or select individual lessons and/or slides as needed. Quiz questions with explained answers and instructions for hands-on exercises are included on slides interspersed throughout. The hands-on activities use the browser-based MongoDB Web Shell, an environment that runs on servers hosted by MongoDB. This means the only technical requirement for these activities is Internet access and a web browser. The materials are freely available for non-commercial use and are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. ## Why Create This Course? We created this course in response to requests from the educational community for support bridging the gap in teaching materials for MongoDB. We received many requests from the academic community for teaching materials on document databases, MongoDB's features, and how to model schemas using the document model. We hope this material will be a valuable resource for educators teaching with MongoDB in a variety of contexts, and their learners. ## Course Outline The course compares and contrasts relational and non-relational databases, outlines the architecture of MongoDB, and details how to model data in MongoDB. The included quizzes and hands-on exercises support active learning and retention of key concepts and skills. This material can support a wide variety of instructional objectives, including learning best practices for querying data and structuring data models in MongoDB, and using features like transactions and aggregations. ## Course Lessons The course consists of 22 lessons across a wide range of MongoDB topics. The lessons can be taught individually or as part of a wider selection of lessons from the course. The lessons are as follows: - What is a modern general purpose database? 
- SQL and MQL - Non-relational databases - Querying in SQL and in MQL - When to use SQL and when to use MQL - Documents and MongoDB - MongoDB is a data platform - MongoDB architecture - MongoDB Atlas - The MongoDB Query Language (MQL) - Querying complex data with MQL - Querying data with operators and compound conditions - Inserting and updating data in MongoDB - Deleting data in MongoDB - The MongoDB aggregation framework - Querying data in MongoDB with the aggregation framework - Data modeling and schema design patterns - Sharding in MongoDB - Indexing in MongoDB - Transactions in MongoDB - Change streams in MongoDB - Drivers, connectors, and the wider ecosystem ## What is in a Lesson Each lesson covers the specified topic and includes a number of quizzes designed to assess the material presented. Several lessons provide hands-on examples suitable for students to follow themselves or for the educator to present in a live-coding fashion to the class. This provides a command line interface similar to the Mongo Shell but which you interact with through a standard web browser. ## Using the MongoDB Web Shell The MongoDB Web Shell is ideal for use in the hands-on exercise portions of Introduction to Modern Databases or anytime a web browser-accessible MongoDB environment is needed. The MongoDB Web Shell provides a command line interface similar to the Mongo Shell but which you interact with through a standard web browser. Let us walk through a small exercise using the MongoDB Web Shell: - First, open another tab in your web browser and navigate to the MongoDB Web Shell. - Now for our exercise, let's create a collection for cow documents and insert 10 new cow documents into the collection. We will include a name field and a field with a varying value of 'milk'. ``` javascript for(c=0;c<10;c++) { db.cows.insertOne( { name: "daisy", milk: c } ) } ``` - Let's now use the follow query in the same tab with the MongoDB Web Shell to find all the cow documents where the value for milk is greater than eight. ``` javascript db.cows.find( { milk: { $gt: 8 } } ) ``` - The output in the MongoDB Web Shell will be similar to the following but with a different ObjectId. ``` json { "_id": ObjectId(5f2aefa8fde88235b959f0b1e), "name" : "daisy", "milk" : 9 } ``` - Then let's show that we can perform another CRUD operation, update, and let's change the name of the cow to 'rose' and change the value of milk to 10 for that cow. ``` javascript db.cows.updateOne( { milk: 9 }, { $set: { name: "rose" }, $inc: { milk: 1 } } ) ``` - We can query on the name of the cow to see the results of the update operation. ``` javascript db.cows.find( { name: "rose" } ) ``` This example gives only a small taste of what you can do with the MongoDB Web Shell. ## What is MongoDB for Academia? MongoDB for Academia is our program to support educators and students. The program offers educational content, resources, and community for teaching and learning MongoDB, whether in colleges and universities, technical bootcamps, online learning courses, high schools, or other educational programs. For more information on MongoDB for Academia's free resources and support for educators and students, visit the MongoDB for Academia website. ## Course Materials and Getting Involved in the MongoDB for Academia Program All of the materials for Introduction to Modern Databases can be downloaded here. If you also want to get involved and learn more about the MongoDB Academia program, you can join the email list at and join our community forums.
md
{ "tags": [ "MongoDB" ], "pageDescription": "Introduction to Modern Databases, a new free course with materials and resources for educators.", "contentType": "News & Announcements" }
MongoDB Academia - Introduction to Modern Databases
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/top-4-reasons-to-use-mongodb
created
# The Top 4 Reasons Why You Should Use MongoDB Welcome (or welcome back!) to the SQL to MongoDB series. In the first post in this series, I mapped terms and concepts from SQL to MongoDB. I also introduced you to Ron. Let's take a moment and return to Ron. Ron is pretty set in his ways. For example, he loves his typewriter. It doesn't matter that computers are a bajillion times more powerful than typewriters. Until someone convinces him otherwise, he's sticking with his typewriter. Maybe you don't have a love for typewriters. But perhaps you have a love for SQL databases. You've been using them for years, you've learned how to make them work well enough for you, and you know that learning MongoDB will require you to change your mindset. Is it really worth the effort? Yes! In this post, we'll examine the top four reasons why you should use MongoDB: * Scale Cheaper * Query Faster * Pivot Easier * Program Faster > This article is based on a presentation I gave at MongoDB World and MongoDB.local Houston entitled "From SQL to NoSQL: Changing Your Mindset." > > If you prefer videos over articles, check out the recording. Slides are available here. ## Scale Cheaper You can scale cheaper with MongoDB. Why? Let's begin by talking about scaling SQL databases. Typically, SQL databases scale vertically-when a database becomes too big for its server, it is migrated to a larger server. Vertical scaling by migrating to larger servers A few key problems arise with vertical scaling: * Large servers tend to be more expensive than two smaller servers with the same total capacity. * Large servers may not be available due to cost limitations, cloud provider limitations, or technology limitations (a server the size you need may not exist). * Migrating to a larger server may require application downtime. When you use MongoDB, you have the flexibility to scale horizontally through sharding. Sharding is a method for distributing data across multiple servers. When your database exceeds the capacity of its current server, you can begin sharding and split it over two servers. As your database continues to grow, you can continue to add more servers. The advantage is that these new servers don't need to be big, expensive machines-they can be cheaper, commodity hardware. Plus, no downtime is required. Horizonal scaling by adding more commodity servers ## Query Faster Your queries will typically be faster with MongoDB. Let's examine why. Even in our simple example in the previous post where we modeled Leslie's data in SQL, we saw that her information was spread across three tables. Whenever we want to query for Leslie's information, we'll need to join three tables together. In these three small tables, the join will be very fast. However, as the tables grow and our queries become more complex, joining tables together becomes very expensive. Recall our rule of thumb when modeling data in MongoDB: *data that is accessed together should be stored together*. When you follow this rule of thumb, most queries will not require you to join any data together. Continuing with our earlier example, if we want to retrieve Leslie's information from MongoDB, we can simply query for a single document in the `Users` collection. As a result, our query will be very fast. As our documents and collections grow larger, we don't have to worry about our queries slowing down as long as we are using indexes and continue following our rule of thumb: *data that is accessed together should be stored together*. ## Pivot Easier Requirements change. 
Sometimes the changes are simple and require only a few tweaks to the user interface. But sometimes changes go all the way down to the database. In the previous post in this series, we discovered—after implementing our application—that we needed to store information about Lauren's school. Let's take a look at this example a little more closely. To add a new `school` column in our SQL database, we're going to have to alter the `Users` table. Executing the `Alter Table` command could take a couple of hours depending on how much data is in the table. The performance of our application could be decreased while the table is being altered, and we may need to schedule downtime for our application. Now let's examine how we can do something similar in MongoDB. When our requirements change and we need to begin storing the name of a user's school in a `User` document, we can simply begin doing so. We can choose if and when to update existing documents in the collection. If we had implemented schema validation, we would have the option of applying the validation to all inserts and updates or only to inserts and updates to documents that already meet the schema requirements. We would also have the choice of throwing an error or a warning if a validation rule is violated. With MongoDB, you can easily change the shape of your data as your app evolves. ## Program Faster To be honest with you, this advantage is one of the biggest surprises to me. I figured that it didn't matter what you used as your backend database—the code that interacts with it would be basically the same. I was wrong. MFW I realized how much easier it is to code with MongoDB. MongoDB documents map to data structures in most popular programming languages. This sounds like such a simple thing, but it makes a *humongous* difference when you're writing code. A friend encouraged me to test this out, so I did. I implemented the code to retrieve and update user profile information. My code has some simplifications in it to enable me to focus on the interactions with the database rather than the user interface. I also limited the user profile information to just contact information and hobbies. Below is a comparison of my implementation using MySQL and MongoDB. I wrote the code in Python, but, don't worry if you're not familiar with Python, I'll walk you through it step by step. The concepts will be applicable no matter what your programming language of choice is. ### Connect to the Databases Let's begin with the typical top-of-the-file stuff. We'll import what we need, connect to the database, and declare our variables. I'm going to simplify things by hardcoding the User ID of the user whose profile we will be retrieving rather than pulling it dynamically from the frontend code. MySQL ``` python import mysql.connector # CONNECT TO THE DB mydb = mysql.connector.connect( host="localhost", user="root", passwd="rootroot", database="CityHall" ) mycursor = mydb.cursor(dictionary=True) # THE ID OF THE USER WHOSE PROFILE WE WILL BE RETRIEVING AND UPDATING userId = 1 ``` We'll pass the dictionary=True option when we create the cursor so that each row will be returned as a dictionary. MongoDB ``` python import pymongo from pymongo import MongoClient # CONNECT TO THE DB client = MongoClient() client = pymongo.MongoClient("mongodb+srv://root:[email protected]/test?retryWrites=true&w=majority") db = client.CityHall # THE ID OF THE USER WHOSE PROFILE WE WILL BE RETRIEVING AND UPDATING userId = 1 ``` So far, the code is pretty much the same. 
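One quick aside before we move on to queries. Earlier, in the Pivot Easier section, I mentioned schema validation and its options. In case you're curious what that looks like in practice, here is a rough sketch using PyMongo and the `db` handle created above; this snippet is my own illustration and is not part of the original example app.

``` python
# Illustrative only: require that *if* a document in Users has a "school"
# field, it must be a string.
db.command(
    "collMod",
    "Users",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "properties": {"school": {"bsonType": "string"}},
        }
    },
    validationLevel="moderate",   # only inserts and updates to already-valid documents
    validationAction="warn",      # log violations instead of rejecting the write
)
```

Switching `validationLevel` to `"strict"` applies the rule to all inserts and updates, and `validationAction="error"` rejects writes that violate it, which covers the choices described earlier.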
### Get the User's Profile Information

Now that we have our database connections ready, let's use them to retrieve our user profile information. We'll store the profile information in a Python Dictionary. Dictionaries are a common data structure in Python and provide an easy way to work with your data.

Let's begin by implementing the code for MySQL. Since the user profile information is spread across the `Users` table and the `Hobbies` table, we'll need to join them in our query. We can use prepared statements to ensure our data stays safe.

MySQL

``` python
sql = "SELECT * FROM Users LEFT JOIN Hobbies ON Users.ID = Hobbies.user_id WHERE Users.id=%s"
values = (userId,)

mycursor.execute(sql, values)
user = mycursor.fetchone()
```

When we execute the query, a result is returned for every user/hobby combination. When we call `fetchone()`, we get a dictionary like the following:

``` python
{u'city': u'Pawnee', u'first_name': u'Leslie', u'last_name': u'Yepp', u'user_id': 1, u'school': None, u'longitude': -86.5366, u'cell': u'8125552344', u'latitude': 39.1703, u'hobby': u'scrapbooking', u'ID': 10}
```

Because we joined the `Users` and the `Hobbies` tables, we have a result for each hobby this user has. To retrieve all of the hobbies, we need to iterate the cursor. We'll append each hobby to a new `hobbies` array and then add the `hobbies` array to our `user` dictionary.

MySQL

``` python
hobbies = []
if (user["hobby"]):
    hobbies.append(user["hobby"])
del user["hobby"]
del user["ID"]

for result in mycursor:
    hobbies.append(result["hobby"])
user["hobbies"] = hobbies
```

Now let's implement that same functionality for MongoDB. Since we stored all of the user profile information in the `User` document, we don't need to do any joins. We can simply retrieve a single document in our collection.

Here is where the big advantage that *MongoDB documents map to data structures in most popular programming languages* comes in. I don't have to do any work to get my data into an easy-to-work-with Python Dictionary. MongoDB gives me all of the results in a Python Dictionary automatically.

MongoDB

``` python
user = db['Users'].find_one({"_id": userId})
```

And that's it—we're done. What took us 12 lines for MySQL, we were able to implement in 1 line for MongoDB.

Our `user` dictionaries are now pretty similar in both pieces of code.

MySQL

``` python
{
  'city': 'Pawnee',
  'first_name': 'Leslie',
  'last_name': 'Yepp',
  'school': None,
  'cell': '8125552344',
  'latitude': 39.1703,
  'longitude': -86.5366,
  'hobbies': ['scrapbooking', 'eating waffles', 'working'],
  'user_id': 1
}
```

MongoDB

``` python
{
  'city': 'Pawnee',
  'first_name': 'Leslie',
  'last_name': 'Yepp',
  'cell': '8125552344',
  'location': [-86.536632, 39.170344],
  'hobbies': ['scrapbooking', 'eating waffles', 'working'],
  '_id': 1
}
```

Now that we have retrieved the user's profile information, we'd likely send that information up the stack to the frontend UI code.

### Update the User's Profile Information

When Leslie views her profile information in our application, she may discover she needs to update her profile information. The frontend UI code would send that updated information in a Python dictionary to the Python files we've been writing.

To simulate Leslie updating her profile information, we'll manually update the Python dictionary ourselves for both MySQL and MongoDB.
MySQL ``` python user.update( { "city": "Washington, DC", "latitude": 38.897760, "longitude": -77.036809, "hobbies": ["scrapbooking", "eating waffles", "signing bills"] } ) ``` MongoDB ``` python user.update( { "city": "Washington, DC", "location": [-77.036809, 38.897760], "hobbies": ["scrapbooking", "eating waffles", "signing bills"] } ) ``` Now that our `user` dictionary is updated, let's push the updated information to our databases. Let's begin with MySQL. First, we need to update the information that is stored in the `Users` table. MySQL ``` python sql = "UPDATE Users SET first_name=%s, last_name=%s, cell=%s, city=%s, latitude=%s, longitude=%s, school=%s WHERE (ID=%s)" values = (user["first_name"], user["last_name"], user["cell"], user["city"], user["latitude"], user["longitude"], user["school"], userId) mycursor.execute(sql, values) mydb.commit() ``` Second, we need to update our hobbies. For simplicity, we'll delete any existing hobbies in the `Hobbies` table for this user and then we'll insert the new hobbies into the `Hobbies` table. MySQL ``` python sql = "DELETE FROM Hobbies WHERE user_id=%s" values = (userId,) mycursor.execute(sql, values) mydb.commit() if(len(user["hobbies"]) > 0): sql = "INSERT INTO Hobbies (user_id, hobby) VALUES (%s, %s)" values = [] for hobby in user["hobbies"]: values.append((userId, hobby)) mycursor.executemany(sql,values) mydb.commit() ``` Now let's update the user profile information in MongoDB. Since the user's profile information is stored in a single document, we only have to do a single update. Once again we will benefit from MongoDB documents mapping to data structures in most popular programming languages. We can send our `user` Python dictionary when we call `update_one()`, which significantly simplifies our code. MongoDB ``` python result = db['Users'].update_one({"_id": userId}, {"$set": user}) ``` What took us 15 lines for MySQL, we were able to implement in 1 line for MongoDB. ### Summary of Programming Faster In this example, we wrote 27 lines of code to interact with our data in MySQL and 2 lines of code to interact with our data in MongoDB. While fewer lines of code is not always indicative of better code, in this case, we can probably agree that fewer lines of code will likely lead to easier maintenance and fewer bugs. The examples above were relatively simple with small queries. Imagine how much bigger the difference would be for larger, more complex queries. MongoDB documents mapping to data structures in most popular programming languages can be a huge advantage in terms of time to write, debug, and maintain code. The code above was written in Python and leveraged the Python MongoDB Driver. For a complete list of all of the programming languages that have MongoDB drivers, visit the [MongoDB Manual. If you'd like to grab a copy of the code in the examples above, visit my GitHub repo. ## Wrap Up In this post, we discussed the top four reasons why you should use MongoDB: * Scale Cheaper * Query Faster * Pivot Easier * Program Faster Be on the lookout for the final post in this series where I'll discuss the top three things you need to know as you move from SQL to MongoDB.
md
{ "tags": [ "MongoDB" ], "pageDescription": "Discover the top 4 reasons you should use MongoDB", "contentType": "Article" }
The Top 4 Reasons Why You Should Use MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/internet-of-toilets
created
# An Introduction to IoT (Internet of Toilets) My favorite things in life are cats, computers, and crappy ideas, so I decided to combine all three and explore what was possible with JavaScript by creating a brand new Internet of Things (IoT) device for my feline friend at home. If you're reading this, you have probably heard about how hot internet-connected devices are, and you are probably interested in learning how to get into IoT development as a JavaScript developer. In this post, we will explore why you should consider JavaScript for your next IoT project, talk about IoT data best practices, and we will explore my latest creation, the IoT Kitty Litter Box. ## IoT And JS(?!?!) Okay, so why on earth should you use JavaScript on an IoT project? You might have thought JavaScript was just for webpages. Well, it turns out that JavaScript is famously eating the world and it is now, in fact, running on lots of new and exciting devices, including most internet-enabled IoT chips! Did you know that 58% of developers that identified as IoT developers use Node.js? ### The Internet *Already* Speaks JavaScript That's a lot of IoT developers already using Node.js. Many of these developers use Node because the internet *already* speaks JavaScript. It's natural to continue building internet-connected devices using the de facto standard of the internet. Why reinvent the wheel? ### Easy to Update Another reason IoT developers use Node is it's ease in updating your code base. With other programming languages commonly used for IoT projects (C or C++), if you want to update the code, you need to physically connect to the device, and reflash the device with the most up-to-date code. However, with an IoT device running Node, all you have to do is remotely run `git pull` and `npm install`. Now that's much easier. ### Node is Event-Based One of the major innovations of Node is the event loop. The event loop enables servers running Node to handle events from the outside world (i.e. requests from clients) very quickly. Node is able to handle these events extremely efficiently and at scale. Now, consider how an IoT device in the wild is built to run. In this thought experiment, let's imagine that we are designing an IoT device for a farm that will be collecting moisture sensor data from a cornfield. Our device will be equipped with a moisture sensor that will send a signal once the moisture level in the soil has dropped below a certain level. This means that our IoT device will be responding to a moisture *event* (sounds a lot like an *event loop* ;P). Nearly all IoT use cases are built around events like this. The fact that Node's event-based architecture nearly identically matches the event-based nature of IoT devices is a perfect fit. Having an event-based IoT architecture means that your device can save precious power when it does not need to respond to an event from the outside world. ### Mature IoT Community Lastly, it's important to note that there is a mature community of IoT developers actively working on IoT libraries for Node.js. My favorites are Johnny-Five and CylonJS. Let's take a look at the "Hello World" on IoT devices: making an LED bulb blink. Here's what it looks like when I first got my IoT "Hello World" code working. Just be careful that your cat doesn't try to eat your project while you are getting your Hello World app up and running. ## IoT (AKA: Internet of Toilets) Kitty Litter Box This leads me to my personal IoT project, the IoT Kitty Litter Box. 
For this project, I opted to use Johnny-Five. So, what the heck is this IoT Kitty Litter Box, and why would anyone want to build such a thing? Well, the goal of building an internet-connected litter box was to: - Help track my feline friend's health by passively measuring my cat's weight every time he sets foot in the litter tray. - Monitor my cat's bathroom patterns over time. It will make it easy to track any changes in bathroom behavior. - Explore IoT projects and have some fun! Also, personally, I like the thought of building something that teeters right on the border of being completely ridiculous and kinda genius. Frankly, I'm shocked that no one has really made a consumer product like this! Here it is in all of its completed glory. ### Materials and Tools - 1 x Raspberry Pi - I used a Raspberry Pi 3 Model B for this demo, but any model will do. - 1 x Breadboard - 2 x Female to male wires - 1 x 3D printer \Optional\] - The 3D printer was used for printing the case where the electronics are enclosed. - 1 x PLA filament \[Optional\] - Any color will work. - 1 x Solder iron and solder wire - 8 x M2x6 mm bolts - 1 x HX711 module - This module is required as a load cell amplifier and it converts the analog load cell signal to a digital signal so the Raspberry Pi can read the incoming data. - 4 x 50 kg load cell (x4) - They are used to measure the weight. In this project, four load cells are used and can measure a maximum weight of 200kg. - 1 x Magnetic door sensor - Used to detect that the litter box is opened. - 1 x Micro USB cable - 1 x Cat litter box ### How Does the IoT Kitty Litter Box Work? So how does this IoT Kitty Litter Box work? Let's take a look at the events that I needed to handle: - When the lid of the box is removed, the box enters "maintenance mode." When in maintenance mode, I can remove waste or refresh the litter. - When the lid of the box is put back on, it leaves maintenance mode, waits one minute for the litter to settle, then it recalibrates a new base weight after being cleaned. - The box then waits for a cat-sized object to be added to the weight of the box. When this event occurs, we wait 15 seconds for the cat to settle and the box records the weight of the cat and records it in a MongoDB database. - When the cat leaves the box, we reset the base weight of the box, and the box waits for another bathroom or maintenance event to occur. You can also check out this handy animation that walks through the various events that we must handle. ![Animation of how the box works ### How to Write Code That Interacts With the Outside World For this project, I opted to work with a Raspberry Pi 3 Model B+ since it runs a full Linux distro and it's easy to get Node running on it. The Raspberry Pi is larger than other internet-enabled chips on the market, but its ease of use makes it ideal for first-timers looking to dip into IoT projects. The other reason I picked the Raspberry Pi is the large array of GPIO pins. These pins allow you to do three things. 1. Power an external sensor or chip. 2. Read input from a sensor (i.e. read data from a light or moisture sensor). 3. Send data from the Raspberry Pi to the outside world (i.e. turning a light on and off). I wired up the IoT Kitty Litter Box using the schema below. I want to note that I am not an electrical engineer and creating this involved lots of Googling, failing, and at least two blown circuit boards. It's okay to make mistakes, especially when you are first starting out. 
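Before we get into the box's own code, here is roughly what that LED-blink "Hello World" mentioned earlier looks like with Johnny-Five and raspi-io. Treat it as a minimal sketch rather than the exact code from the project, and note that the pin name is an assumption that depends on how your LED is actually wired.

``` javascript
// Minimal Johnny-Five "Hello World" on a Raspberry Pi: blink an LED.
// The pin name ('GPIO4') is just an example; use the GPIO pin your LED is wired to.
const { RaspiIO } = require('raspi-io');
const five = require('johnny-five');

const board = new five.Board({
  io: new RaspiIO(),
});

board.on('ready', () => {
  // Create an LED on a GPIO output pin and toggle it every 500 ms.
  const led = new five.Led('GPIO4');
  led.blink(500);
});
```

The same `Board` / `ready` pattern shows up again below when we read the magnetic switch and the load cells.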
### Schematics We will be using these GPIO pins in order to communicate with our sensors out in the "real world." ## Let's Dig Into the Code I want to start with the most simple component on the box, the magnetic switch that is triggered when the lid is removed, so we know when the box goes into "maintenance mode." If you want to follow along, you can check out the complete source code here. ### Magnetic Switch ``` javascript const { RaspiIO } = require('raspi-io'); const five = require('johnny-five'); // Initialize a new Raspberry Pi Board const board = new five.Board({ io: new RaspiIO(), }); // Wait for the board to initialize then start reading in input from sensors board.on('ready', () => { // Initialize a new switch on the 16th GPIO Input pin const spdt = new five.Switch('GPIO16'); // Wait for the open event to get triggered by the sensor spdt.on('open', () => { enterMaintenceMode(); }); // Recalibrate the box once the sensor has closed // Once the box has been cleaned, the box prepares for a new event spdt.on('close', () => { console.log('close'); // When the box has been closed again // wait 1 min for the box to settle // and recalibrate a new base weight setTimeout(() => { scale.calibrate(); }, 60000); }); }); board.on('fail', error => { handleError(error); }); ``` You can see the event and asynchronous nature of IoT plays really nicely with Node's callback structure. Here's a demo of the magnetic switch component in action. ### Load Cells Okay, now let's talk about my favorite component, the load cells. The load cells work basically like any bathroom scale you may have at home. The load cells are responsible for converting the pressure placed on them into a digital weight measurement I can read on the Raspberry Pi. I start by taking the base weight of the litter box. Then, I wait for the weight of something that is approximately cat-sized to be added to the base weight of the box and take the cat's weight. Once the cat leaves the box, I then recalibrate the base weight of the box. I also recalibrate the base weight after every time the lid is taken off in order to account for events like the box being cleaned or having more litter added to the box. In regards to the code for reading data from the load cells, things were kind of tricky. This is because the load cells are not directly compatible with Johnny-Five. I was, however, able to find a Python library that can interact with the HX711 load cells. ``` python #! /usr/bin/python2 import time import sys import RPi.GPIO as GPIO from hx711 import HX711 # Infintely run a loop that checks the weight every 1/10 of a second while True: try: # Prints the weight - and send it to the parent Node process val = hx.get_weight() print(val) # Read the weight every 1/10 of a second time.sleep(0.1) except (KeyboardInterrupt, SystemExit): cleanAndExit() ``` In order to use this code, I had to make use of Node's Spawn Child Process API. The child process API is responsible for spinning up the Python process on a separate thread. Here's what that looks like. 
``` javascript
const spawn = require('child_process').spawn;

class Scale {
  constructor(client) {
    // Spin up the child process when the Scale is initialized
    this.process = spawn('python', ['./hx711py/scale.py'], {
      detached: true,
    });
  }

  getWeight() {
    // Takes stdout data from Python child script which executed
    // with arguments and send this data to res object
    this.process.stdout.on('data', data => {
      // The data is returned from the Python process as a string
      // We need to parse it to a float
      this.currWeight = parseFloat(data);

      // If a cat is present - do something
      if (this.isCatPresent()) {
        this.handleCatInBoxEvent();
      }
    });

    this.process.stderr.on('data', err => {
      handleError(String(err));
    });

    this.process.on('close', (code, signal) => {
      console.log(
        `child process exited with code ${code} and signal ${signal}`
      );
    });
  }

  [...]
}

module.exports = Scale;
```

This was the first time I had played around with the Spawn Child Process API from Node. Personally, I was really impressed by how easy it was to use and troubleshoot. It's not the most elegant solution, but it totally works for my project and it uses some cool features of Node.

Let's take a look at what the load cells look like in action. In the video below, you can see how pressure placed on the load cells is registered as a weight measurement from the Raspberry Pi.

## How to Handle IoT Data

Okay, so as a software engineer at MongoDB, I would be remiss if I didn't talk about what to do with all of the data from this IoT device. For my IoT Litter Box, I am saving all of the data in a fully managed database service on MongoDB Atlas. Here's how I connected the litter box to the MongoDB Atlas database.

``` javascript
const MongoClient = require('mongodb').MongoClient;
const uri = 'YOUR MONGODB URI HERE';
const client = new MongoClient(uri, { useNewUrlParser: true });

client.connect(err => {
  const collection = client.db('IoT').collection('toilets');
  // perform actions on the collection object
  client.close();
});
```

### IoT Data Best Practices

There are a lot of places to store your IoT data these days, so I want to talk about what you should look for when you are evaluating data platforms.

#### High Database Throughput

First thing when selecting a database for your IoT project: you need to ensure that your database is able to handle a massive amount of concurrent writes. Most IoT architectures are write-heavy, meaning that you are writing more data to your database than reading from it. Let's say that I decide to start mass manufacturing and selling my IoT Kitty Litter Boxes. Once I deploy a couple of thousand boxes in the wild, my database could potentially have a massive amount of concurrent writes if all of the cats go to the bathroom at the same time! That's going to be a lot of incoming data my database will need to handle!

#### Flexible Data Schema

You should also consider a database that is able to handle a flexible schema. This is because it is common to either add or upgrade sensors on an IoT device. For example, on my litter box, I was able to easily update my schema to add the switch data when I decided to start tracking how often the box gets cleaned.

#### Your Database Should Easily Handle Time Series Data

Lastly, you will want to select a database that natively handles time series data. Consider how your data will be used. For most IoT projects, the data will be collected, analyzed, and visualized on a graph or chart over time. For my IoT Litter Box, my database schema looks like the following.
``` json
{
  "_id": { "$oid": "dskfjlk2j92121293901233" },
  "timestamp_day": { "$date": { "$numberLong": "1573854434214" } },
  "type": "cat_in_box",
  "cat": { "name": "BMO", "weight": "200" },
  "owner": "Joe Karlsson",
  "events": [
    {
      "timestamp_event": { "$date": { "$numberLong": "1573854435016" } },
      "weight": { "$numberDouble": "15.593333333" }
    },
    {
      "timestamp_event": { "$date": { "$numberLong": "1573854435824" } },
      "weight": { "$numberDouble": "15.132222222" }
    },
    {
      "timestamp_event": { "$date": { "$numberLong": "1573854436632" } },
      "type": "maintenance"
    }
  ]
}
```

## Summary

Alright, let's wrap this party up. In this post, we talked about why you should consider using Node for your next IoT project: It's easy to update over a network, the internet already speaks JavaScript, there are tons of existing libraries/plugins/APIs (including CylonJS and Johnny-Five), and JavaScript is great at handling event-driven apps. We looked at a real-life Node-based IoT project, my IoT Kitty Litter Box. Then, we dug into the code base for the IoT Kitty Litter Box. We also discussed what to look for when selecting a database for IoT projects: It should be able to concurrently write data quickly, have a flexible schema, and be able to handle time series data.

What's next? Well, if I have inspired you to get started on your own IoT project, I say, "Go for it!" Pick out a project, even if it's "crappy," and build it. Google as you go, and make mistakes. I think it's the best way to learn. I hereby give you permission to make stupid stuff just for you, something to help you learn and grow as a human being and a software engineer.

>When you're ready to build your own IoT device, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.

## Related Links

Check out the following resources for more information:

- Bringing JavaScript to the IoT Edge - Joe Karlsson | Node + JS Interactive 2019.
- IoT Kitty Litter Box Source Code.
- Want to learn more about MongoDB? Be sure to take a class on MongoDB University.
- Have a question, feedback on this post, or stuck on something? Be sure to check out and/or open a new post on the MongoDB Community Forums.
- Quick Start: Node.js.
- Want to check out more cool articles about MongoDB? Be sure to check out more posts like this on the MongoDB Developer Hub.
md
{ "tags": [ "JavaScript", "RaspberryPi" ], "pageDescription": "Learn all about developing IoT projects using JS and MongoDB by building an smart toilet for your cat! Click here for more!", "contentType": "Code Example" }
An Introduction to IoT (Internet of Toilets)
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/push-notifications-atlas-app-services-realm-sdk
created
# Push Notifications Using Atlas App Services & iOS Realm SDK In a serverless application, one of the important features that we must implement for the success of our application is push notifications. Realm allows us to have a complete push notification system directly in our Services App. To do this, we’ll make use of several components that we’ll explain here and develop in this tutorial. But first, let’s describe what our application does. ## Context The application consists of a public list of books that are stored locally on the device thanks to Atlas Device Sync, so we can add/delete or update the books directly in our Atlas collection and the changes will be reflected in our application. We can also add as favorites any book from the list, although to do so, it’ll be necessary to register beforehand using an email and password. We will integrate email/password authentication in our application through the Atlas App Services authentication providers. The books, as they belong to a collection synchronized with Atlas Device Sync, will not only be stored persistently on our device but will also be synchronized with our user. This means that we can retrieve the list of favorites on any device where we register with our credentials. Changes made to our favorites using other devices are automatically synchronized. ### Firebase The management of push notifications will be done through Firebase Cloud Messaging. In this way, we benefit from a single service to send notifications to iOS, web, and Android. The configuration is similar to the one we would follow for any other application. The difference is that we will install the firebase-admin SDK in our Atlas App Services application. ### Triggers The logic of this application for sending push notifications will be done through triggers. To do this, we will define two use cases: 1. A book has been added or deleted: For this, we will make use of the topics in Firebase, so when a user registers to receive this type of notification, they will receive a message every time a book is added/deleted from the general list of books. 2. A book added to my favorites list has been modified: We will make use of the Firebase tokens for each device. We relate the token received to the user so that when a book is modified, only the user/s that have it in their favorites list will receive the notification. ### Functions The Atlas Triggers will have a function linked that will apply the logic for sending push notifications to the end devices. We will make use of the Firebase Admin SDK that we will install in our App Services App as a dependency. ## Overall application logic This application will show how we can integrate push notifications with an iOS application developed in Swift. We will discuss how we have created each part with code, diagrams, and example usage. At the end of this tutorial, you’ll find a link to a Github repository where you’ll find both the code of the iOS application as well as the code of the App Services application. When we start the application for the first time, we log in using anonymous authentication, since to view the list of books, it’s not necessary to register using email/password. However, an anonymous user will still be created and saved in a collection Users in Atlas. When we first access our application and enable push notifications, in our code, we register with Firebase. This will generate a registration token, also known as FCMToken, that we will use later to send custom push notifications. 
Once we obtain the FCM token, we will call a function through the Realm SDK to save this token in the user document corresponding to the logged-in user. Within the document, we have defined a token field that will be composed of an array of FCM tokens. To do this, we will make use of the Firebase SDK and the `Messaging` method, so that we are notified every time the token changes or a new token is generated. In our Swift code, we will use this function to insert a new FCM token for our user.

```swift
Messaging.messaging().token { token, error in
  if let error = error {
    print("Error fetching FCM registration token: \(error)")
  } else if let token = token {
    print("FCM registration token: \(token)")
    // Save token in user collection
    user.functions.updateFCMUserToken([AnyBSON(token), AnyBSON("add")], self.onCustomDataUpdated(result:realmError:))
  }
}
```

In our App Services app, we must implement the logic of the `updateFCMUserToken` function that will store the token in our user document.

#### Function code in Atlas

```javascript
exports = function(FCMToken, operation) {

  const db = context.services.get("mongodb-atlas").db("product");
  const userData = db.collection("customUserData");

  if (operation === "add") {
    console.log("add");
    userData.updateOne({"userId": context.user.id},
      { "$addToSet": {
          "FCMToken": FCMToken
        }
      }).then((doc) => {
        return {success: `User token updated`};
      }).catch(err => {
        return {error: `User ${context.user.id} not found`};
      });
  } else if (operation === "remove") {
    console.log("remove");
  }
};
```

We have decided to save an array of tokens to be able to send a notification to each device that the same user has used to access the application. The following is an example of a User document in the collection:

```JSON
{
  "_id": { "$oid": "626c213ece7819d62ebbfb99" },
  "color": "#1AA7ECFF",
  "fullImage": false,
  "userId": "6268246e3e0b17265d085866",
  "bookNotification": true,
  "FCMToken": [
    "as3SiBN0kBol1ITGdBqGS:APA91bERtZt-O-jEg6jMMCjPCfYdo1wmP9RbeATAXIQKQ3rfOqj1HFmETvdqm2MJHOhx2ZXqGJydtMWjHkaAD20A8OtqYWU3oiSg17vX_gh-19b85lP9S8bvd2TRsV3DqHnJP8k-t2WV",
    "e_Os41X5ikUMk9Kdg3-GGc:APA91bFzFnmAgAhbtbqXOwD6NLnDzfyOzYbG2E-d6mYOQKZ8qVOCxd7cmYo8X3JAFTuXZ0QUXKJ1bzSzDo3E0D00z3B4WFKD7Yqq9YaGGzf_XSUcCexDTM46bm4Ave6SWzbh62L4pCbS"
  ]
}
```

## Send notification to a topic

Firebase allows us to subscribe to a topic so that we can send a notification to all devices that have ever subscribed to it without the need to send the notification to specific device tokens.

In our application, once we have registered using an email and password, we can subscribe to receive notifications every time a new book is added or deleted.

*Setting view in the iOS app*

When we activate this option, what happens is that we use the Firebase SDK to register for the topic `books`.
```swift static let booksTopic = "books" @IBAction func setBookPushNotification(_ sender: Any) { if booksNotificationBtn.isOn { Messaging.messaging().subscribe(toTopic: SettingsViewController.booksTopic) print("Subscribed to \(SettingsViewController.booksTopic)") } else { Messaging.messaging().unsubscribe(fromTopic: SettingsViewController.booksTopic) print("Unsubscribed to \(SettingsViewController.booksTopic)") } } ``` ### How does it work? The logic we follow will be as below: In our Atlas App Services App, we will have a database trigger that will monitor the Books collection for any new inserts or deletes. Upon the occurrence of either of these two operations, the linked function shall be executed and send a push notification to the “books” topic. To configure this trigger, we’ll make use of two very important options: * **Full Document**: This will allow us to receive the document created or modified in our change event. * **Document Pre-Image**: For delete operations, we will receive the document that was modified or deleted before your change event. With these options, we can determine which changes have occurred and send a message using the title of the book to inform about the change. The configuration of the trigger in the App Services UI will be as follows: The function linked to the trigger will determine whether the operation occurred as an `insert` or `delete` and send the push notification to the topic **books** with the title information. Function logic: ```javascript const admin = require('firebase-admin'); admin.initializeApp({ credential: admin.credential.cert({ projectId: context.values.get('projectId'), clientEmail: context.values.get('clientEmail'), privateKey: context.values.get('fcm_private_key_value').replace(/\\n/g, '\n'), }), }); const topic = 'books'; const message = {topic}; if (changeEvent.operationType === 'insert') { const name = changeEvent.fullDocument.volumeInfo.title; const image = changeEvent.fullDocument.volumeInfo.imageLinks.smallThumbnail; message.notification = { body: `${name} has been added to the list`, title: 'New book added' }; if (image !== undefined) { message.apns = { payload: { aps: { 'mutable-content': 1 } }, fcm_options: { image } }; } } else if (changeEvent.operationType === 'delete') { console.log(JSON.stringify(changeEvent)); const name = changeEvent.fullDocumentBeforeChange.volumeInfo.title; message.notification = { body: `${name} has been deleted from the list`, title: 'Book deleted' }; } admin.messaging().send(message) .then((response) => { // Response is a message ID string. console.log('Successfully sent message:', response); return true; }) .catch((error) => { console.log('Error sending message:', error); return false; }); ``` When someone adds a new book, everyone who opted-in for push notifications will receive the following: ## Send notification to a specific device To send a notification to a specific device, the logic will be somewhat different. For this use case, every time a book is updated, we will search if it belongs to the favourites list of any user. For those users who have such a book, we will send a notification to all registered tokens. This will ensure that only users who have added the updated book to their favorites will receive a notification alerting them that there has been a change. ### How does it work? For this part, we will need a database trigger that will monitor for updates operations on the books collection. 
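The function linked to this update trigger isn't shown in full in this article, so here is a rough, hedged sketch of what it could look like. Treat the specifics as assumptions, especially the hypothetical `favoriteBooks` array on the user document; the real app may model favorites differently. The idea is simply to find every user who has the updated book in their favorites and queue one document per user in the auxiliary collection described below.

```javascript
exports = async function(changeEvent) {
  const db = context.services.get('mongodb-atlas').db('product');
  const userData = db.collection('customUserData');
  const pushNotification = db.collection('pushNotification');

  // The updated book (requires the Full Document option on the trigger)
  const updatedBook = changeEvent.fullDocument;

  // Hypothetical favorites model: each user document keeps an array of favorite book ids
  const usersToNotify = await userData.find({ favoriteBooks: updatedBook._id }).toArray();

  for (const user of usersToNotify) {
    // Skip users with no registered devices
    if (!user.FCMToken || user.FCMToken.length === 0) {
      continue;
    }

    // Queue a document; a second trigger on this collection sends the actual FCM message
    await pushNotification.insertOne({
      token: user.FCMToken,
      changes: updatedBook,
      date: new Date(),
      processed: false
    });
  }
};
```

Splitting the work this way keeps the update trigger cheap and moves the Firebase call into the dedicated notification trigger, which is what the rest of this section configures.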
The configuration of this trigger is much simpler, as we only need to monitor the `updates` that occur in the book collection. The configuration of the trigger in our UI will be as follows:

When such an operation occurs, we’ll check if there is any user who has added that book to their favorites list. If there is, we will create a new document in the ***pushNotifications*** collection.

This auxiliary collection is used to optimize the sending of push notifications and handle exceptions. It will even allow us to set up a monitoring system as well as retries.

Every time we send a notification, we’ll insert a document with the following:

1. The changes that occurred in the original document.
2. The FCM tokens of the recipient devices.
3. The date when the notification was registered.
4. A processed property to know if the notification has been sent.

Here’s an example of a push notification document:

```JSON
{
  "_id": { "$oid": "62a0da5d860040b7938eab87" },
  "token": [
    "e_OpA2X6ikUMk9Kdg3-GGc:APA91bFzFnmAgAhbtbqXOwD6NLnDzfyOzYbG2E-d6mYOQKZ8qVOCxd7cmYo8X3JAFTuXZ0QUXKJ1bzSzDo3E0D00z3B4WFKD7Yqq9YaGGzf_XSUcCexDTM46bm4Ave6SWzbh62L4pCbS",
    "fQvffGBN2kBol1ITGdBqGS:APA91bERtZt-O-jEg6jMMCjPCfYdo1wmP9RbeATAXIQKQ3rfOqj1HFmETvdqm2MJHOhx2ZXqGJydtMWjHkaAD20A8OtqYWU3oiSg17vX_gh-19b85lP9S8bvd2TRsV3DqHnJP8k-t2WV"
  ],
  "date": { "$date": { "$numberLong": "1654708829678" } },
  "processed": true,
  "changes": {
    "volumeInfo": {
      "title": "Pacific on Linguistics",
      "publishedDate": "1964",
      "industryIdentifiers": [
        {
          "type": "OTHER",
          "identifier": "UOM:39015069017963"
        }
      ],
      "readingModes": {
        "text": false,
        "image": false
      },
      "categories": [
        "Linguistics"
      ],
      "imageLinks": {
        "smallThumbnail": "http://books.google.com/books/content?id=aCVZAAAAMAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api",
        "thumbnail": "http://books.google.com/books/content?id=aCVZAAAAMAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api"
      },
      "language": "en"
    }
  }
}
```

To process the notifications, we’ll have a database trigger that will monitor the ***pushNotifications*** collection, and each new document will send a notification to the tokens of the client devices.

#### Function logic

```javascript
exports = async function(changeEvent) {

  const admin = require('firebase-admin');
  const db = context.services.get('mongodb-atlas').db('product');

  const id = changeEvent.documentKey._id;

  const bookCollection = db.collection('book');
  const pushNotification = db.collection('pushNotification');

  admin.initializeApp({
    credential: admin.credential.cert({
      projectId: context.values.get('projectId'),
      clientEmail: context.values.get('clientEmail'),
      privateKey: context.values.get('fcm_private_key_value').replace(/\\n/g, '\n'),
    }),
  });

  const registrationToken = changeEvent.fullDocument.token;
  console.log(JSON.stringify(registrationToken));

  const title = changeEvent.fullDocument.changes.volumeInfo.title;
  const image = changeEvent.fullDocument.changes.volumeInfo.imageLinks.smallThumbnail;

  const message = {
    notification: {
      body: 'One of your favorites changed',
      title: `${title} changed`
    },
    tokens: registrationToken
  };

  if (image !== undefined) {
    message.apns = {
      payload: {
        aps: {
          'mutable-content': 1
        }
      },
      fcm_options: {
        image
      }
    };
  }

  // Send a message to the device corresponding to the provided
  // registration token.
  admin.messaging().sendMulticast(message)
    .then((response) => {
      // Response is a message ID string.
      console.log('Successfully sent message:', response);
      pushNotification.updateOne({'_id': BSON.ObjectId(`${id}`)}, { "$set": { processed: true } });
    })
    .catch((error) => {
      console.log('Error sending message:', error);
    });
};
```

Example of a push notification to a user:

## Repository

The complete code for both the App Services App as well as for the iOS application can be found in a dedicated GitHub repository.

If you have found this tutorial useful, let me know so I can continue to add more information as well as step-by-step videos on how to do this.

And if you’re as excited about Atlas App Services as I am, create your first free App today!
md
{ "tags": [ "Atlas", "Swift", "JavaScript", "Google Cloud", "iOS" ], "pageDescription": "Use our Atlas App Services application to create a complete push notification system that fits our business logic.", "contentType": "Tutorial" }
Push Notifications Using Atlas App Services & iOS Realm SDK
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-database-new-data-types
created
# New Realm Data Types: Dictionaries/Maps, Sets, Mixed, and UUIDs ## TL;DR Starting with Realm Javascript 10.5, Realm Cocoa 10.8, Realm .NET 10.2, and Realm Java 10.6, developers will be able persist and query new language specific data types in the Realm Database. These include Dictionaries/Maps, Sets, a Mixed type, and UUIDs. ## Introduction We're excited to announce that the Realm SDK team has shipped four new data types for the Realm Mobile Database. This work – prioritized in response to community requests – continues to make using the Realm SDKs an intuitive, idiomatic experience for developers. It eliminates even more boilerplate code from your codebase, and brings the data layer closer to your app's source code. These new types make it simple to model flexible data in Realm, and easier to work across Realm and MongoDB Atlas. Mobile developers who are building with Realm and MongoDB Realm Sync can leverage the flexibility of MongoDB's data structure in their offline-first mobile applications. Read on to learn more about each of the four new data types we've released, and see examples of when and how to use them in your data modeling: - Dictionaries/Maps - Mixed - Sets - UUIDs > **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free! ## Dictionaries/Maps Dictionaries/maps allow developers to store data in arbitrary key-value pairs. They're used when a developer wants to add flexibility to data models that may evolve over time, or handle unstructured data from a remote endpoint. They also enable a mobile developer to store unique key-value pairs without needing to check the data for uniqueness before insertion. Both Dictionaries and Maps can be useful when working with REST APIs, where extra data may be returned that's not defined in a mobile app's schema. Mobile developers who need to future-proof their schema against rapidly changing feature requirements and future product iterations will also find it useful to work with these new data types in Realm. Consider a gaming app that has multiple games within it, and a single Player class. The developer building the app knows that future releases will need to enable new functionality, like a view of player statistics and a leaderboard. But the Player can serve in different roles for each available game. This makes defining a strict structure for player statistics difficult. With Dictionary/Map data types, the developer can place a gameplayStats field on the Player class as a dictionary. Using this dictionary, it's simple to display a screen that shows the player's most common roles, the games they've competed in, and any relevant statistics that the developer wants to include on the leaderboard. After the leaderboard has been released and iterated on, the developer can look to migrate their Dictionary to a more formally structured class as part of a formal feature. ::::tabs :::tab]{tabid="Kotlin"} ``` kotlin import android.util.Log import io.realm.Realm import io.realm.RealmDictionary import io.realm.RealmObject import io.realm.kotlin.where import kotlinx.coroutines.flow.Flow import kotlinx.coroutines.flow.flow import java.util.AbstractMap open class Player : RealmObject() { var name: String? = null var email: String? = null var playerHandle: String? 
= null var gameplayStats: RealmDictionary = RealmDictionary() var competitionStats: RealmDictionary = RealmDictionary() } realm.executeTransactionAsync { r: Realm -> val player = Player() player.playerHandle = "iDubs" // get the RealmDictionary field from the object we just created and add stats player.gameplayStats = RealmDictionary(mapOf()) .apply { "mostCommonRole" to "Medic" "clan" to "Realmers" "favoriteMap" to "Scorpian Bay" "tagLine" to "Always be Healin" "nemesisHandle" to "snakeCase4Life" } player.competitionStats = RealmDictionary(mapOf()).apply { "EastCoastInvitational" to "2nd Place" "TransAtlanticOpen" to "4th Place" } r.insert(player) } // Developer implements a Competitions View - // emit all entries in the dictionary for view by the user val player = realm.where().equalTo("name", "iDubs").findFirst() player?.let { player.competitionStats.addChangeListener { map, changes -> val insertions = changes.insertions for (insertion in insertions) { Log.v("EXAMPLE", "Player placed at a new competition $insertion") } } } fun competitionFlow(): Flow = flow { for ((competition, place) in player!!.competitionStats) { emit("$competition - $place") } } // Build a RealmQuery that searches the Dictionary type val query = realm.where().equalTo("name", "iDubs") val entry = AbstractMap.SimpleEntry("nemesisHandle", "snakeCase4Life") val playerQuery = query.containsEntry("gameplayStats", entry).findFirst() // remove player nemesis - they are friends now! realm.executeTransaction { r: Realm -> playerQuery?.gameplayStats?.remove("nemesisHandle") } ``` ::: :::tab[]{tabid="Swift"} ``` swift import Foundation import RealmSwift class Player: Object { @objc dynamic var name: String? @objc dynamic var email: String? @objc dynamic var playerHandle: String? let gameplayStats = Map() let competitionStats = Map() } let realm = try! Realm() try! realm.write { let player = Player() player.name = "iDubs" // get the Map field from the object we just created and add stats let statsDictionary = player.gameplayStats statsDictionary["mostCommonRole"] = "Medic" statsDictionary["clan"] = "Realmers" statsDictionary["favoriteMap"] = "Scorpian bay" statsDictionary["tagLine"] = "Always Be Healin" statsDictionary["nemesisHandle"] = "snakeCase4Life" let competitionStats = player.competitionStats competitionStats["EastCoastInvitational"] = "2nd Place" competitionStats["TransAtlanticOpen"] = "4th Place" realm.add(player) // Developer implements a Competitions View - // emit all entries in the dictionary for view by the user // query for all Player objects let players = realm.objects(Player.self) // run the `.filter()` method on all the returned Players to find the competition rankings let playerQuery = players.filter("name == 'iDubs'") guard let competitionDictionary = playerQuery.first?.competitionStats else { return } for entry in competitionDictionary { print("Competition: \(entry.key)") print("Place: \(entry.value)") } // Set up the listener to watch for new competition rankings var notificationToken = competitionDictionary.observe(on: nil) { changes in switch changes { case .update(_, _, let insertions, _): for insertion in insertions { let insertedCompetition = competitionDictionary[insertion] print("Player placed at a new competition \(insertedCompetition ?? 
"")") } default: print("Only handling updates") } } } ``` ::: :::tab[]{tabid="JavaScript"} ``` javascript const PlayerSchema = { name: "Player", properties: { name: "string?", email: "string?", playerHandle: "string?", gameplayStats: "string{}", competitionStats: "string{}", }, }; let player; realm.write(() => { player = realm.create("Player", { name: "iDubs", gameplayStats: { mostCommonRole: "Medic", clan: "Realmers", favoriteMap: "Scorpian Bay", tagLine: "Always Be Healin", nemesisHandle: "snakeCase4Life", }, competitionStats: { EastCoastInvitational: "2nd Place", TransAtlanticOpen: "4th Place", } }); // query for all Player objects const players = realm.objects("Player"); // run the `.filtered()` method on all the returned Players to find the competition rankings const playerQuery = players.filtered("name == 'iDubs'"); // Developer implements a Competitions View - // emit all entries in the dictionary for the user to view const competitionDictionary = playerQuery.competitionStats; if(competitionDictionary != null){ Object.keys(competitionDictionary).forEach(key => { console.log(`"Competition: " ${key}`); console.log(`"Place: " ${p[key]}`); } } // Set up the listener to watch for new competition rankings playerQuery.addListener((changedCompetition, changes) => { changes.insertions.forEach((index) => { const insertedCompetition = changedCompetition[index]; console.log(`"Player placed at a new competition " ${changedCompetition.@key}!`); }); // Build a RealmQuery that searches the Dictionary type const playerNemesis = playerQuery.filtered( `competitionStats.@keys = "playerNemesis" ` ); // remove player nemesis - they are friends now! if(playerNemesis != null){ realm.write(() => { playerNemesis.remove(["playerNemesis"]); }); } ``` ::: :::tab[]{tabid=".NET"} ``` csharp public class Player : RealmObject { public string Name { get; set; } public string Email { get; set; } public string PlayerHandle { get; set; } [Required] public IDictionary GamePlayStats { get; } [Required] public IDictionary CompetitionStats { get; } } realm.Write(() => { var player = realm.Add(new Player { PlayerHandle = "iDubs" }); // get the RealmDictionary field from the object we just created and add stats var statsDictionary = player.GamePlayStats; statsDictionary["mostCommonRole"] = "Medic"; statsDictionary["clan"] = "Realmers"; statsDictionary["favoriteMap"] = "Scorpian Bay"; statsDictionary["tagLine"] = "Always be Healin"; statsDictionary["nemesisHandle"] = "snakeCase4Life"; var competitionStats = player.CompetitionStats; competitionStats["EastCoastInvitational"] = "2nd Place"; competitionStats["TransAtlanticOpen"] = "4th Place"; }); // Developer implements a Competitions View - // emit all entries in the dictionary for view by the user var player = realm.All().Single(t => t.Name == "iDubs"); // Loop through one by one foreach (var competition in player.CompetitionStats) { Debug.WriteLine("Competition: " + $"{competition.Key}"); Debug.WriteLine("Place: " + $"{competition.Value}"); } // Set up the listener to emit a new competitions var token = competitionStats. 
SubscribeForKeyNotifications((dict, changes, error) => { if (changes == null) { return; } foreach (var key in changes.InsertedKeys) { Debug.WriteLine($"Player placed at a new competition: {key}: {dict[key]}"); } }); // Build a RealmQuery that searches the Dictionary type var snakeCase4LifeEnemies = realm.All.Filter("GamePlayStats['playerNemesis'] == 'snakeCase4Life'"); // snakeCase4Life has changed their attitude and are no longer // at odds with anyone realm.Write(() => { foreach (var player in snakeCase4LifeEnemies) { player.GamePlayStats.Remove("playerNemesis"); } }); ``` ::: :::: ## Mixed Realm's Mixed type allows any Realm primitive type to be stored in the database, helping developers when strict type-safety isn't appropriate. Developers may find this useful when dealing with data they don't have total control over – like receiving data and values from a third-party API. Mixed data types are also useful when dealing with legacy states that were stored with the incorrect types. Converting the type could break other APIs and create considerable work. With Mixed types, developers can avoid this difficulty and save hours of time. We believe Mixed data types will be especially valuable for users who want to sync data between Realm and MongoDB Atlas. MongoDB's document-based data model allows a single field to support many types across documents. For users syncing data between Realm and Atlas, the new Mixed type allows developers to persist data of any valid Realm primitive type, or any Realm Object class reference. Developers don't risk crashing their app because a field value violated type-safety rules in Realm. ::::tabs :::tab[]{tabid="Kotlin"} ``` kotlin import android.util.Log import io.realm.* import io.realm.kotlin.where open class Distributor : RealmObject() { var name: String = "" var transitPolicy: String = "" } open class Business : RealmObject() { var name: String = "" var deliveryMethod: String = "" } open class Individual : RealmObject() { var name: String = "" var salesTerritory: String = "" } open class Palette(var owner: RealmAny = RealmAny.nullValue()) : RealmObject() { var scanId: String? 
= null open fun ownerToString(): String { return when (owner.type) { RealmAny.Type.NULL -> { "no owner" } RealmAny.Type.STRING -> { owner.asString() } RealmAny.Type.OBJECT -> { when (owner.valueClass) { is Business -> { val business = owner.asRealmModel(Business::class.java) business.name } is Distributor -> { val distributor = owner.asRealmModel(Distributor::class.java) distributor.name } is Individual -> { val individual = owner.asRealmModel(Individual::class.java) individual.name } else -> "unknown type" } } else -> { "unknown type" } } } } realm.executeTransaction { r: Realm -> val newDistributor = r.copyToRealm(Distributor().apply { name = "Warehouse R US" transitPolicy = "Onsite Truck Pickup" }) val paletteOne = r.copyToRealm(Palette().apply { scanId = "A1" }) // Add the owner of the palette as an object reference to another Realm class paletteOne.owner = RealmAny.valueOf(newDistributor) val newBusiness = r.copyToRealm(Business().apply { name = "Mom and Pop" deliveryMethod = "Cheapest Private Courier" }) val paletteTwo = r.copyToRealm(Palette().apply { scanId = "B2" owner = RealmAny.valueOf(newBusiness) }) val newIndividual = r.copyToRealm(Individual().apply { name = "Traveling Salesperson" salesTerritory = "DC Corridor" }) val paletteThree = r.copyToRealm(Palette().apply { scanId = "C3" owner = RealmAny.valueOf(newIndividual) }) } // Get a reference to palette one val paletteOne = realm.where() .equalTo("scanId", "A1") .findFirst()!! // Extract underlying Realm Object from RealmAny by casting it RealmAny.Type.OBJECT val ownerPaletteOne: Palette = paletteOne.owner.asRealmModel(Palette::class.java) Log.v("EXAMPLE", "Owner of Palette One: " + ownerPaletteOne.ownerToString()) // Get a reference to the palette owned by Traveling Salesperson // so that you can remove ownership - they're broke! val salespersonPalette = realm.where() .equalTo("owner.name", "Traveling Salesperson") .findFirst()!! val salesperson = realm.where() .equalTo("name", "Traveling Salesperson") .findFirst() realm.executeTransaction { r: Realm -> salespersonPalette.owner = RealmAny.nullValue() } val paletteTwo = realm.where() .equalTo("scanId", "B2") .findFirst()!! // Set up a listener to see when Ownership changes for relabeling of palettes val listener = RealmObjectChangeListener { changedPalette: Palette, changeSet: ObjectChangeSet? -> if (changeSet != null && changeSet.changedFields.contains("owner")) { Log.i("EXAMPLE", "Palette $'paletteTwo.scanId' has changed ownership.") } } // Observe object notifications. paletteTwo.addChangeListener(listener) ``` ::: :::tab[]{tabid="Swift"} ``` swift import Foundation import RealmSwift class Distributor: Object { @objc dynamic var name: String? @objc dynamic var transitPolicy: String? } class Business: Object { @objc dynamic var name: String? @objc dynamic var deliveryMethod: String? } class Individual: Object { @objc dynamic var name: String? @objc dynamic var salesTerritory: String? } class Palette: Object { @objc dynamic var scanId: String? let owner = RealmProperty() var ownerName: String? { switch owner.value { case .none: return "no owner" case .string(let value): return value case .object(let object): switch object { case let obj as Business: return obj.name case let obj as Distributor: return obj.name case let obj as Individual: return obj.name default: return "unknown type" } default: return "unknown type" } } } let realm = try! Realm() try! 
realm.write { let newDistributor = Distributor() newDistributor.name = "Warehouse R Us" newDistributor.transitPolicy = "Onsite Truck Pickup" let paletteOne = Palette() paletteOne.scanId = "A1" paletteOne.owner.value = .object(newDistributor) let newBusiness = Business() newBusiness.name = "Mom and Pop" newBusiness.deliveryMethod = "Cheapest Private Courier" let paletteTwo = Palette() paletteTwo.scanId = "B2" paletteTwo.owner.value = .object(newBusiness) let newIndividual = Individual() newIndividual.name = "Traveling Salesperson" newIndividual.salesTerritory = "DC Corridor" let paletteThree = Palette() paletteTwo.scanId = "C3" paletteTwo.owner.value = .object(newIndividual) } // Get a Reference to PaletteOne let paletteOne = realm.objects(Palette.self) .filter("name == 'A1'").first // Extract underlying Realm Object from AnyRealmValue field let ownerPaletteOneName = paletteOne?.ownerName print("Owner of Palette One: \(ownerPaletteOneName ?? "not found")"); // Get a reference to the palette owned by Traveling Salesperson // so that you can remove ownership - they're broke! let salespersonPalette = realm.objects(Palette.self) .filter("owner.name == 'Traveling Salesperson").first let salesperson = realm.objects(Individual.self) .filter("name == 'Traveling Salesperson'").first try! realm.write { salespersonPalette?.owner.value = .none } let paletteTwo = realm.objects(Palette.self) .filter("name == 'B2'") ``` ::: :::tab[]{tabid="JavaScript"} ``` javascript const DistributorSchema = { name: "Distributor", properties: { name: "string", transitPolicy: "string", }, }; const BusinessSchema = { name: "Business", properties: { name: "string", deliveryMethod: "string", }, }; const IndividualSchema = { name: "Individual", properties: { name: "string", salesTerritory: "string", }, }; const PaletteSchema = { name: "Palette", properties: { scanId: "string", owner: "mixed", }, }; realm.write(() => { const newDistributor; newDistributor = realm.create("Distributor", { name: "Warehouse R Us", transitPolicy: "Onsite Truck Pickup" }); const paletteOne; paletteOne = realm.create("Palette", { scanId: "A1", owner: newDistributor }); const newBusiness; newBusiness = realm.create("Business", { name: "Mom and Pop", deliveryMethod: "Cheapest Private Courier" }); const paletteTwo; paletteTwo = realm.create("Palette", { scanId: "B2", owner: newBusiness }); const newIndividual; newIndividual = realm.create("Business", { name: "Traveling Salesperson", salesTerritory: "DC Corridor" }); const paletteThree; paletteThree = realm.create("Palette", { scanId: "C3", owner: newIndividual }); }); //Get a Reference to PaletteOne const paletteOne = realm.objects("Palette") .filtered(`scanId == 'A1'`); //Extract underlying Realm Object from mixed field const ownerPaletteOne = paletteOne.owner; console.log(`Owner of PaletteOne: " ${ownerPaletteOne.name}!`); //Get a reference to the palette owned by Traveling Salesperson // so that you can remove ownership - they're broke! 
const salespersonPalette = realm.objects("Palette") .filtered(`owner.name == 'Traveling Salesperson'`); let salesperson = realm.objects("Individual") .filtered(`name == 'Traveling Salesperson'`) realm.write(() => { salespersonPalette.owner = null }); // Observe the palette to know when the owner has changed for relabeling let paletteTwo = realm.objects("Palette") .filtered(`scanId == 'B2'`) function onOwnerChange(palette, changes) { changes.changedProperties.forEach((prop) => { if(prop == owner){ console.log(`Palette "${palette.scanId}" has changed ownership to "${palette[prop]}"`); } }); } paletteTwo.addListener(onOwnerChange); ``` ::: :::tab[]{tabid=".NET"} ``` csharp public class Distributor : RealmObject { public string Name { get; set; } public string TransitPolicy { get; set; } } public class Business : RealmObject { public string Name { get; set; } public string DeliveryMethod { get; set; } } public class Individual : RealmObject { public string Name { get; set; } public string SalesTerritory { get; set; } } public class Palette : RealmObject { public string ScanId { get; set; } public RealmValue Owner { get; set; } public string OwnerName { get { if (Owner.Type != RealmValueType.Object) { return null; } var ownerObject = Owner.AsRealmObject(); if (ownerObject.ObjectSchema.TryFindProperty("Name", out _)) { return ownerObject.DynamicApi.Get("Name"); } return "Owner has no name"; } } } realm.Write(() => { var newDistributor = realm.Add(new Distributor { Name = "Warehouse R Us", TransitPolicy = "Onsite Truck Pickup" }); realm.Add(new Palette { ScanId = "A1", Owner = newDistributor }); var newBusiness =realm.Add(new Business { Name = "Mom and Pop", DeliveryPolicy = "Cheapest Private Courier" }); realm.Add(new Palette { ScanId = "B2", Owner = newBusiness }); var newIndividual = realm.Add(new Individual { Name = "Traveling Salesperson", SalesTerritory = "DC Corridor" }); realm.Add(new Palette { ScanId = "C3", Owner = newIndividual }); }); // Get a Reference to PaletteOne var paletteOne = realm.All() .Single(t => t.ScanID == "A1"); // Extract underlying Realm Object from mixed field var ownerPaletteOne = paletteOne.Owner.AsRealmObject(); Debug.WriteLine($"Owner of Palette One is {ownerPaletteOne.OwnerName}"); // Get a reference to the palette owned by Traveling Salesperson // so that you can remove ownership - they're broke! var salespersonPalette = realm.All() .Filter("Owner.Name == 'Traveling Salesperson'") .Single(); realm.Write(() => { salespersonPalette.Owner = RealmValue.Null; }); // Set up a listener to observe changes in ownership so you can relabel the palette var paletteTwo = realm.All() .Single(p => p.ScanID == "B2"); paletteTwo.PropertyChanged += (sender, args) => { if (args.PropertyName == nameof(Pallette.Owner)) { Debug.WriteLine($"Palette {paletteTwo.ScanId} has changed ownership {paletteTwo.OwnerName}"); } }; ``` ::: :::: ## Sets Sets allow developers to store an unordered array of unique values. This new data type in Realm opens up powerful querying and mutation capabilities with only a few lines of code. With sets, you can compare data and quickly find matches. Sets in Realm have built-in methods for filtering and writing to a set that are unique to the type. Unique methods on the Set type include, isSubset(), contains(), intersects(), formIntersection, and formUnion(). Aggregation functions like min(), max(), avg(), and sum() can be used to find averages, sums, and similar. Sets in Realm have the potential to eliminate hundreds of lines of gluecode. 
Consider an app that suggests expert speakers from different areas of study, who can address a variety of specific topics. The developer creates two classes for this use case: Expert and Topic. Each of these classes has a Set field of strings which defines the disciplines the user is an expert in, and the fields that the topic covers. Sets will make the predicted queries easy for the developer to implement. An app user who is planning a Speaker Panel could see all experts who have knowledge of both "Autonomous Vehicles" and "City Planning." The application could also run a query that looks for experts in one or more of these disciples by using the built-in intersect method, and the user can use results to assemble a speaker panel. Developers who are using [MongoDB Realm Sync to keep data up-to-date between Realm and MongoDB Atlas are able to keep the semantics of a Set in place even when synchronizing data. You can depend on the enforced uniqueness among the values of a Set. There's no need to check the array for a value match before performing an insertion, which is a common implementation pattern that any user of SQLite will be familiar with. The operations performed on Realm Set data types will be synced and translated to documents using the $addToSet group of operations on MongoDB, preserving uniqueness in arrays. ::::tabs :::tab]{tabid="Kotlin"} ``` kotlin import android.util.Log import io.realm.* import io.realm.kotlin.where open class Expert : RealmObject() { var name: String = "" var email: String = "" var disciplines: RealmSet = RealmSet() } open class Topic : RealmObject() { var name: String = "" var location: String = "" var discussionThemes: RealmSet = RealmSet() var panelists: RealmList = RealmList() } realm.executeTransaction { r: Realm -> val newExpert = r.copyToRealm(Expert()) newExpert.name = "Techno King" // get the RealmSet field from the object we just created val disciplineSet = newExpert.disciplines // add value to the RealmSet disciplineSet.add("Trance") disciplineSet.add("Meme Coins") val topic = realm.copyToRealm(Topic()) topic.name = "Bitcoin Mining and Climate Change" val discussionThemes = topic.discussionThemes // Add a list of themes discussionThemes.addAll(listOf("Memes", "Blockchain", "Cloud Computing", "SNL", "Weather Disasters from Climate Change")) } // find experts for a discussion topic and add them to the panelists list val experts: RealmResults = realm.where().findAll() val topic = realm.where() .equalTo("name", "Bitcoin Mining and Climate Change") .findFirst()!! topic.discussionThemes.forEach { theme -> experts.forEach { expert -> if (expert.disciplines.contains(theme)) { topic.panelists.add(expert) } } } //observe the discussion themes set for any changes in the set val discussionTopic = realm.where() .equalTo("name", "Bitcoin Mining and Climate Change") .findFirst() val anotherDiscussionThemes = discussionTopic?.discussionThemes val changeListener = SetChangeListener { collection: RealmSet, changeSet: SetChangeSet -> Log.v( "EXAMPLE", "New discussion themes has been added: ${changeSet.numberOfInsertions}" ) } // Observe set notifications. anotherDiscussionThemes?.addChangeListener(changeListener) // Techno King is no longer into Meme Coins - remove the discipline realm.executeTransaction { it.where() .equalTo("name", "Techno King") .findFirst()?.let { expert -> expert.disciplines.remove("Meme Coins") } } ``` ::: :::tab[]{tabid="Swift"} ``` swift import Foundation import RealmSwift class Expert: Object { @objc dynamic var name: String? 
@objc dynamic var email: String? let disciplines = MutableSet() } class Topic: Object { @objc dynamic var name: String? @objc dynamic var location: String? let discussionThemes = MutableSet() let panelists = List() } let realm = try! Realm() try! realm.write { let newExpert = Expert() newExpert.name = "Techno King" newExpert.disciplines.insert("Trace") newExpert.disciplines.insert("Meme Coins") realm.add(newExpert) let topic = Topic() topic.name = "Bitcoin Mining and Climate Change" topic.discussionThemes.insert("Memes") topic.discussionThemes.insert("Blockchain") topic.discussionThemes.insert("Cloud Computing") topic.discussionThemes.insert("SNL") topic.discussionThemes.insert("Weather Disasters from Climate Change") realm.add(topic) } // find experts for a discussion topic and add them to the panelists list let experts = realm.objects(Expert.self) let topic = realm.objects(Topic.self) .filter("name == 'Bitcoin Mining and Climate Change'").first guard let topic = topic else { return } let discussionThemes = topic.discussionThemes for expert in experts where expert.disciplines.intersects(discussionThemes) { try! realm.write { topic.panelists.append(expert) } } // Observe the discussion themes set for new entries let notificationToken = discussionThemes.observe { changes in switch changes { case .update(_, _, let insertions, _): for insertion in insertions { let insertedTheme = discussionThemes[insertion] print("A new discussion theme has been added: \(insertedTheme)") } default: print("Only handling updates") } } // Techno King is no longer into Meme Coins - remove the discipline try! realm.write { newExpert.disciplines.remove("Meme Coins") } ``` ::: :::tab[]{tabid="JavaScript"} ``` javascript const ExpertSchema = { name: "Expert", properties: { name: "string?", email: "string?", disciplines: "string<>" }, }; const TopicSchema = { name: "Topic", properties: { name: "string?", locaton: "string?", discussionThemes: "string<>", //<> indicate a Set datatype panelists: "Expert[]" }, }; realm.write(() => { let newExpert; newExpert = realm.create("Expert", { name: "Techno King", disciplines: ["Trance", "Meme Coins"], }); let topic; topic = realm.create("Topic", { name: "Bitcoin Mining and Climate Change", discussionThemes: ["Memes", "Blockchain", "Cloud Computing", "SNL", "Weather Disasters from Climate Change"], }); }); // find experts for a discussion topic and add them to the panelists list const experts = realm.objects("Expert"); const topic = realm.objects("Topic").filtered(`name == 'Bitcoin Mining and Climate Change'`); const discussionThemes = topic.discussionThemes; for (int i = 0; i < discussionThemes.size; i++) { for (expert in experts){ if(expert.disciplines.has(dicussionThemes[i]){ realm.write(() => { realm.topic.panelists.add(expert) }); } } } // Set up the listener to watch for new discussion themes added to the topic discussionThemes.addListener((changedDiscussionThemes, changes) => { changes.insertions.forEach((index) => { const insertedDiscussion = changedDiscussionThemes[index]; console.log(`"A new discussion theme has been added: " ${insertedDiscussion}!`); }); // Techno King is no longer into Meme Coins - remove the discipline newExpert.disciplines.delete("Meme Coins") ``` ::: :::tab[]{tabid=".NET"} ``` csharp public class Expert : RealmObject { public string Name { get; set; } public string Email { get; set; } [Required] public ISet Disciplines { get; } } public class Topic : RealmObject { public string Name { get; set; } public string Location { get; set; } [Required] public 
ISet DiscussionThemes { get; } public IList Panelists { get; } } realm.Write(() => { var newExpert = realm.Add(new Expert { Name = "Techno King" }); newExpert.Disciplines.Add("Trance"); newExpert.Disciplines.Add("Meme Coins"); var topic = realm.Add(new Topic { Name = "Bitcoin Mining and Climate Change" }); topic.DiscussionThemes.Add("Memes"); topic.DiscussionThemes.Add("Blockchain"); topic.DiscussionThemes.Add("Cloud Computing"); topic.DiscussionThemes.Add("SNL"); topic.DiscussionThemes.Add("Weather Disasters from Climate Change"); }); // find experts for a discussion topic and add them to the panelists list var experts = realm.All(); var topic = realm.All() .Where(t => t.Name == "Bitcoin Mining and Climate Change"); foreach (expert in experts) { if (expert.Disciplines.Overlaps(topic.DiscussionThemes)) { realm.Write(() => { topic.Panelists.Add(expert); }); } } // Set up the listener to watch for new dicussion themes added to the topic var token = topic.DiscussionThemes .SubscribeForNotifications((collection, changes, error) => { foreach (var i in changes.InsertedIndices) { var insertedDiscussion = collection[i]; Debug.WriteLine($"A new discussion theme has been added to the topic {insertedDiscussion}"); } }); // Techno King is no longer into Meme Coins - remove the discipline newExpert.Disciplines.Remove("Meme Coins") ``` ::: :::: ## UUIDs The Realm SDKs also now support the ability to generate and persist Universally Unique Identifiers (UUIDs) natively. UUIDs are ubiquitous in app development as the most common type used for primary keys. As a 128-bit value, they have become the default for distributed storage of data in mobile to cloud application architectures - making collisions unheard of. Previously, Realm developers would generate a UUID and then cast it as a string to store in Realm. But we saw an opportunity to eliminate repetitive code, and with the release of UUID data types, Realm comes one step closer to boilerplate-free code. Like with the other new data types, the release of UUIDs also brings Realm's data types to parity with MongoDB. Now mobile application developers will be able to set UUIDs on both ends of their distributed datastore, and can rely on Realm Sync to perform the replication. ::::tabs :::tab[]{tabid="Kotlin"} ``` kotlin import io.realm.Realm import io.realm.RealmObject import io.realm.annotations.PrimaryKey import io.realm.annotations.RealmField import java.util.UUID; import io.realm.kotlin.where open class Task: RealmObject() { @PrimaryKey @RealmField("_id") var id: UUID = UUID.randomUUID() var name: String = "" var owner: String= "" } realm.executeTransaction { r: Realm -> // UUID field is generated automatically in the class constructor val newTask = r.copyToRealm(Task()) newTask.name = "Update to use new Data Types" newTask.owner = "Realm Developer" } val taskUUID: Task? = realm.where() .equalTo("_id", "38400000-8cf0-11bd-b23e-10b96e4ef00d") .findFirst() ``` ::: :::tab[]{tabid="Swift"} ``` swift import Foundation import RealmSwift class Task: Object { @objc dynamic var _id = UUID() @objc dynamic var name: String? @objc dynamic var owner: String? override static func primaryKey() -> String? { return "_id" } convenience init(name: String, owner: String) { self.init() self.name = name self.owner = owner } } let realm = try! Realm() try! 
realm.write { // UUID field is generated automatically in the class constructor let newTask = Task(name: "Update to use new Data Types", owner: "Realm Developers") } let uuid = UUID(uuidString: "38400000-8cf0-11bd-b23e-10b96e4ef00d") // Set up the query to retrieve the object with the UUID let predicate = NSPredicate(format: "_id = %@", uuid! as CVarArg) let taskUUID = realm.objects(Task.self).filter(predicate).first ``` ::: :::tab[]{tabid="JavaScript"} ``` javascript const { UUID } = Realm.BSON; const TaskSchema = { name: "Task", primaryKey: "_id", properties: { _id: "uuid", name: "string?", owner: "string?" }, }; let task; realm.write(() => { task = realm.create("Task", { _id: new UUID(), name: "Update to use new Data Type", owner: "Realm Developers" }); let searchUUID = UUID("38400000-8cf0-11bd-b23e-10b96e4ef00d"); const taskUUID = realm.objects("Task") .filtered(`_id == $0`, searchUUID); ``` ::: :::tab[]{tabid=".NET"} ``` csharp public class Task : RealmObject { [PrimaryKey] [MapTo("_id")] public Guid Id { get; private set; } = Guid.NewGuid(); public string Name { get; set; } public string Owner { get; set; } } realm.Write(() => { realm.Add(new Task { PlayerHandle = "Update to use new Data Type", Owner = "Realm Developers" }); }); var searchGUID = Guid.Parse("38400000-8cf0-11bd-b23e-10b96e4ef00d"); var taskGUID = realm.Find(searchGUID); ``` ::: :::: ## Conclusion From the beginning, Realm's engineering team has believed that the best line of code is the one a developer doesn't need to write. With the release of these unique types for mobile developers, we're eliminating the workarounds – the boilerplate code and negative impact on CPU and memory – that are commonly required with certain data structures. And we're doing it in a way that's idiomatic to the platform you're building on. By making it simple to query, store, and sync your data, all in the format you need, we hope we've made it easier for you to focus on building your next great app. Stay tuned by following [@realm on Twitter. Want to Ask a Question? Visit our Forums. Want to be notified about upcoming Realm events, like talks on SwiftUI Best Practices or our new Kotlin Multiplatform SDK? Visit our Global Community Page.
md
{ "tags": [ "Realm" ], "pageDescription": "Four new data types in the Realm Mobile Database - Dictionaries, Mixed, Sets, and UUIDs - make it simple to model flexible data in Realm.", "contentType": "News & Announcements" }
New Realm Data Types: Dictionaries/Maps, Sets, Mixed, and UUIDs
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/stream-data-mongodb-bigquery-subscription
created
# Create a Data Pipeline for MongoDB Change Stream Using Pub/Sub BigQuery Subscription

On 1st October 2022, MongoDB and Google announced a set of open source Dataflow templates for moving data between MongoDB and BigQuery to run analyses on BigQuery using BQML and to bring back inferences to MongoDB. Three templates were introduced as part of this release, including the MongoDB to BigQuery CDC (change data capture) template.

This template requires users to run a change stream on MongoDB, which will monitor inserts and updates on the collection. These changes will be captured and pushed to a Pub/Sub topic. The CDC template will create a job to read the data from the topic, get the changes, apply the transformation, and write the changes to BigQuery. The transformations will vary based on the user input while running the Dataflow job.

Alternatively, you can use a native Pub/Sub capability to set up a data pipeline between your MongoDB cluster and BigQuery. The Pub/Sub BigQuery subscription writes messages to an existing BigQuery table as they are received. Without the BigQuery subscription type, you need a pull or push subscription and a subscriber (such as Dataflow) that reads messages and writes them to BigQuery.

This article explains how to set up the BigQuery subscription to process data read from a MongoDB change stream. As a prerequisite, you’ll need a MongoDB Atlas cluster.

> To set up a free tier cluster, you can register for MongoDB either from Google Cloud Marketplace or from the registration page. Follow the steps in the MongoDB documentation to configure the database user and network settings for your cluster.

On Google Cloud, we will create a Pub/Sub topic, a BigQuery dataset, and a table before creating the BigQuery subscription.

## Create a BigQuery dataset

We’ll start by creating a new dataset for BigQuery in the Google Cloud console. Then, add a new table in your dataset. Define it with a name of your choice and the following schema:

| Field name | Type |
| --- | --- |
| id | STRING |
| source_data | STRING |
| Timestamp | STRING |

## Configure Google Cloud Pub/Sub

Next, we’ll configure a Pub/Sub schema and topic to ingest the messages from our MongoDB change stream. Then, we’ll create a subscription to write the received messages to the BigQuery table we just created.

For this section, we’ll use the Google Cloud Pub/Sub API. Before proceeding, make sure you have enabled the API for your project.

### Define a Pub/Sub schema

From the Cloud Pub/Sub UI, navigate to _Create Schema_.

Provide an appropriate identifier, such as “mdb-to-bq-schema,” to your schema. Then, select “Avro” for the type. Finally, add the following definition to match the fields from your BigQuery table:

```json
{
  "type": "record",
  "name": "Avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    },
    {
      "name": "source_data",
      "type": "string"
    },
    {
      "name": "Timestamp",
      "type": "string"
    }
  ]
}
```

### Create a Pub/Sub topic

From the sidebar, navigate to “Topics” and click on Create a topic. Give your topic an identifier, such as “MongoDBCDC.” Enable the _Use a schema_ field and select the schema that you just created. Leave the rest of the parameters to default and click on _Create Topic_.

### Subscribe to topic and write to BigQuery

From inside the topic, click on _Create new subscription_. Configure your subscription in the following way:

- Provide a subscription ID — for example, “mdb-cdc.”
- Define the Delivery type to _Write to BigQuery_.
- Select your BigQuery dataset from the dropdown. - Provide the name of the table you created in the BigQuery dataset. - Enable _Use topic schema_. You need to have a `bigquery.dataEditor` role on your service account to create a Pub/Sub BigQuery subscription. To grant access using the `bq` command line tool, run the following command: ```sh bq add-iam-policy-binding \ --member="serviceAccount:[email protected]" \ --role=roles/bigquery.dataEditor \ -t ". " ``` Keep the other fields as default and click on _Create subscription_. ## Set up a change stream on a MongoDB cluster Finally, we’ll set up a change stream that listens for new documents inserted in our MongoDB cluster. We’ll use Node.js but you can adapt the code to a programming language of your choice. Check out the Google Cloud documentation for more Pub/Sub examples using a variety of languages. You can find the source code of this example in the dedicated GitHub repository. First, set up a new Node.js project and install the following dependencies. ```sh npm install mongodb @google-cloud/pubsub avro-js ``` Then, add an Avro schema, matching the one we created in Google Cloud Pub/Sub: **./document-message.avsc** ```json { "type": "record", "name": "DocumentMessage", "fields": { "name": "id", "type": "string" }, { "name": "source_data", "type": "string" }, { "name": "Timestamp", "type": "string" } ] } ``` Then create a new JavaScript module — `index.mjs`. Start by importing the required libraries and setting up your MongoDB connection string and your Pub/Sub topic name. If you don’t already have a MongoDB cluster, you can create one for free in [MongoDB Atlas. **./index.mjs** ```js import { MongoClient } from 'mongodb'; import { PubSub } from '@google-cloud/pubsub'; import avro from 'avro-js'; import fs from 'fs'; const MONGODB_URI = ''; const PUB_SUB_TOPIC = 'projects//topics/'; ``` After this, we can connect to our MongoDB instance and set up a change stream event listener. Using an aggregation pipeline, we’ll watch only for “insert” events on the specified collection. We’ll also define a 60-second timeout before closing the change stream. **./index.mjs** ```js let mongodbClient; try { mongodbClient = new MongoClient(MONGODB_URI); await monitorCollectionForInserts(mongodbClient, 'my-database', 'my-collection'); } finally { mongodbClient.close(); } async function monitorCollectionForInserts(client, databaseName, collectionName, timeInMs) { const collection = client.db(databaseName).collection(collectionName); // An aggregation pipeline that matches on new documents in the collection. const pipeline = { $match: { operationType: 'insert' } } ]; const changeStream = collection.watch(pipeline); changeStream.on('change', event => { const document = event.fullDocument; publishDocumentAsMessage(document, PUB_SUB_TOPIC); }); await closeChangeStream(timeInMs, changeStream); } function closeChangeStream(timeInMs = 60000, changeStream) { return new Promise((resolve) => { setTimeout(() => { console.log('Closing the change stream'); changeStream.close(); resolve(); }, timeInMs) }) }; ``` Finally, we’ll define the `publishDocumentAsMessage()` function that will: 1. Transform every MongoDB document received through the change stream. 1. Convert it to the data buffer following the Avro schema. 1. Publish it to the Pub/Sub topic in Google Cloud. 
```js async function publishDocumentAsMessage(document, topicName) { const pubSubClient = new PubSub(); const topic = pubSubClient.topic(topicName); const definition = fs.readFileSync('./document-message.avsc').toString(); const type = avro.parse(definition); const message = { id: document?._id?.toString(), source_data: JSON.stringify(document), Timestamp: new Date().toISOString(), }; const dataBuffer = Buffer.from(type.toString(message)); try { const messageId = await topic.publishMessage({ data: dataBuffer }); console.log(`Avro record ${messageId} published.`); } catch(error) { console.error(error); } } ``` Run the file to start the change stream listener: ```sh node ./index.mjs ``` Insert a new document in your MongoDB collection to watch it go through the data pipeline and appear in your BigQuery table! ## Summary There are multiple ways to load the change stream data from MongoDB to BigQuery and we have shown how to use the BigQuery subscription on Pub/Sub. The change streams from MongoDB are monitored, captured, and later written to a Pub/Sub topic using Java libraries. The data is then written to BigQuery using BigQuery subscription. The datatype for the BigQuery table is set using Pub/Sub schema. Thus, the change stream data can be captured and written to BigQuery using the BigQuery subscription capability of Pub/Sub. ## Further reading 1. A data pipeline for [MongoDB Atlas and BigQuery using Dataflow. 1. Setup your first MongoDB cluster using Google Marketplace. 1. Run analytics using BigQuery using BigQuery ML. 1. How to publish a message to a topic with schema.
md
{ "tags": [ "MongoDB", "JavaScript", "Google Cloud", "Node.js", "AI" ], "pageDescription": "Learn how to set up a data pipeline from your MongoDB database to BigQuery using change streams and Google Cloud Pub/Sub.", "contentType": "Tutorial" }
Create a Data Pipeline for MongoDB Change Stream Using Pub/Sub BigQuery Subscription
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/introduction-to-gdelt-data
created
# An Introduction to GDELT Data ## An Introduction to GDELT Data ### (and How to Work with It and MongoDB) Hey there! There's a good chance that if you're reading this, it's because you're planning to enter the MongoDB "Data as News" Hackathon! If not, well, go ahead and sign up here! Now that that's over with, let's get to the first question you probably have: ### What is GDELT? GDELT is an acronym, standing for "Global Database of Events, Language and Tone". It's a database of geopolitical event data, automatically derived and translated in real time from hundreds of news sources in 65 languages. It's around two terabytes of data, so it's really quite big! Each event contains the following data: Details of the one or more actors - usually countries or political entities. The type of event that has occurred, such as "appeal for judicial cooperation" The positive or negative sentiment perceived towards the event, on a scale of -10 (very negative) to +10 (very positive) An "impact score" on the Goldstein Scale, indicating the theoretical potential impact that type of event will have on the stability of a country. ### But what does it look like? The raw data GDELT provides is hosted as CSV files, zipped and uploaded for every 15 minutes since February 2015. A row in the CSV files contains data that looks a bit like this: | Field Name | Value | |-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------| | _id | 1037207900 | | Day | 20210401 | | MonthYear | 202104 | | Year | 2021 | | FractionDate | 2021.2493 | | Actor1Code | USA | | Actor1Name | NORTH CAROLINA | | Actor1CountryCode | USA | | IsRootEvent | 1 | | EventCode | 43 | | EventBaseCode | 43 | | EventRootCode | 4 | | QuadClass | 1 | | GoldsteinScale | 2.8 | | NumMentions | 10 | | NumSources | 1 | | NumArticles | 10 | | AvgTone | 1.548672566 | | Actor1Geo_Type | 3 | | Actor1Geo_Fullname | Albemarle, North Carolina, United States | | Actor1Geo_CountryCode | US | | Actor1Geo_ADM1Code | USNC | | Actor1Geo_ADM2Code | NC021 | | Actor1Geo_Lat | 35.6115 | | Actor1Geo_Long | -82.5426 | | Actor1Geo_FeatureID | 1017529 | | Actor2Geo_Type | 0 | | ActionGeo_Type | 3 | | ActionGeo_Fullname | Albemarle, North Carolina, United States | | ActionGeo_CountryCode | US | | ActionGeo_ADM1Code | USNC | | ActionGeo_ADM2Code | NC021 | | ActionGeo_Lat | 35.6115 | | ActionGeo_Long | -82.5426 | | ActionGeo_FeatureID | 1017529 | | DateAdded | 2022-04-01T15:15:00Z | | SourceURL | https://www.dailyadvance.com/news/local/museum-to-host-exhibit-exploring-change-in-rural-us/article_42fd837e-c5cf-5478-aec3-aa6bd53566d8.html | | downloadId | 20220401151500 | This event encodes Actor1 (North Carolina) hosting a visit (Cameo Code 043) … and in this case the details of the visit aren't included - it's an "exhibit exploring change in the Rural US." You can click through the SourceURL link to read further details. Every event looks like this. One or two actors, possibly some "action" detail, and then a verb, encoded using the CAMEO verb encoding. CAMEO is short for "Conflict and Mediation Event Observations", and you can find the full verb listing in this PDF. If you need a more "computer readable" version of the CAMEO verbs, one is hosted here. ### What's So Interesting About an Enormous Table of Geopolitical Data? We think that there are a bunch of different ways to think about the data encoded in the GDELT dataset. 
Firstly, it's a longitudinal dataset, going back through time. Data in GDELT v2 goes from the present day back to 2015, providing a huge amount of event data for the past 7 years. But the GDELT v1 dataset, which is less rich, goes back until 1979! This gives an unparalleled opportunity to study the patterns and trends of geopolitics for the past 43 years. More than just a historical dataset, however, GDELT is a living dataset, updated every 15 minutes. This means it can also be considered an event system for understanding the world right now. How you use this ability is up to you, but it shouldn't be ignored! GDELT is also a geographical dataset. Each event encodes one or more points of its actors and actions, so the data can be analysed from a GIS standpoint. But more than all of this, GDELT models human interactions at a large scale. The Goldstein (impact) score (GoldsteinScale), and the sentiment score (AvgTone) provide the human impact of the events being encoded. Whether you choose to explore one of the axes above, using ML, or visualisation; whether you choose to use GDELT data on its own, or combine it with another data source; whether you choose to home in on specific events in the recent past; we're sure that you'll discover new understandings of the world around you by analysing the news data it contains. ### How To Work with GDELT? Over the next few weeks we're going to be publishing blog posts, hosting live streams and AMA (ask me anything) sessions to help you with your GDELT and MongoDB journey. In the meantime, you have a couple of options: You can work with our existing GDELT data cluster (containing the entirety of last year's GDELT data), or you can load a subset of the GDELT data into your own cluster. #### Work With Our Hosted GDELT Cluster We currently host the past year's GDELT data in a cluster called GDELT2. You can access it read-only using Compass, or any of the MongoDB drivers, with the following connection string: ``` mongodb+srv://readonly:[email protected]/GDELT?retryWrites=true&w=majority ``` The raw data is contained in a collection called "eventsCSV", and a slightly massaged copy of the data (with Actors and Actions broken down into subdocuments) is contained in a collection called "recentEvents". We're still making changes to this cluster, and plan to load more data in as time goes on (as well as keeping up-to-date with the 15-minute updates to GDELT!), so keep an eye out for updates to this blog post! #### How to Get GDELT into Your Own MongoDB Cluster There's a high likelihood that you can't work with the data in its raw form. For one reason or another you need the data in a different format, or filtered in some way to work with it efficiently. In that case, I highly recommend you follow Adrienne's advice in her GDELT Primer README. In the next few days we'll be publishing a tool to efficiently load the data you want into a MongoDB cluster. In the meantime, read up on GDELT, have a look at the sample data, and find some teammates to build with! ### Further Reading The following documents contain most of the official documentation you'll need for working with GDELT. We've summarized much of it here, but it's always good to check the source, and you'll need the CAMEO encoding listing! GDELT data documentation GDELT Master file CAMEO code guide ### What next? We hope the above gives you some insight into this fascinating dataset. 
We've chosen it as the theme, "Data as News", for this year's MongoDB World Hackathon due to its size, longevity, currency, and global relevance. If you fancy exploring the GDELT dataset further, learning MongoDB, and competing for some one-of-a-kind prizes, go ahead and sign up here for the Hackathon! We'd be glad to have you!
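If you'd like to poke at the data before then, here is a minimal, hedged sketch of querying the hosted cluster described above with the MongoDB Java driver. Substitute the read-only connection string from the "Work With Our Hosted GDELT Cluster" section; the collection choice, filter field, and threshold are illustrative assumptions only.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class GdeltSample {
    public static void main(String[] args) {
        // Placeholder: paste the read-only connection string shown earlier in this article.
        String uri = "<read-only GDELT2 connection string>";

        try (MongoClient client = MongoClients.create(uri)) {
            // "eventsCSV" holds the raw rows; "recentEvents" breaks actors/actions into subdocuments.
            MongoCollection<Document> events = client.getDatabase("GDELT")
                    .getCollection("eventsCSV");

            // Print a handful of strongly negative-impact events
            // (GoldsteinScale is the impact score described above).
            events.find(new Document("GoldsteinScale", new Document("$lte", -5.0)))
                  .limit(5)
                  .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```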
md
{ "tags": [ "MongoDB" ], "pageDescription": "What is the GDELT dataset and how to work with it and MongoDB and participate in the MongoDB World Hackathon '22", "contentType": "Quickstart" }
An Introduction to GDELT Data
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/ruby/mongodb-jruby
created
# A Plan for MongoDB and JRuby

## TLDR

MongoDB will continue to support JRuby.

## Background

In April 2021, our Ruby team began to discuss the possibility of removing MongoDB's official support for JRuby. At the time, we decided to shelve these discussions and revisit them in a year. In March 2022, right on schedule, we began examining metrics and reviewing user feedback around JRuby, as well as evaluating our backlog of items around this runtime.

JRuby itself is still actively maintained and used by many Ruby developers, but our own user base tends toward MRI/CRuby, or ‘vanilla Ruby’. We primarily looked at telemetry from active MongoDB Atlas clusters, commercial support cases, and a number of other sources, like Stack Overflow question volume. Based on the data available, we decided that it would be safe to drop support for JRuby from our automated tests and stop accepting pull requests related to this runtime. We did not expect this decision to be controversial.

## User Feedback

As a company that manages numerous open source projects, we work in a public space. Our JIRA and GitHub issues are available to peruse. And so it was not very long before a user commented on this work and asked us *not to do this, please*. One of the core JRuby maintainers, Charles Nutter, also reached out on the Ruby ticket to discuss this change.

When we opened a pull request to action this decision, the resulting community feedback encouraged us to reconsider. As the goal of any open source project is to bolster adoption and engagement, we ultimately chose to reverse course for the time being, especially since JRuby had subsequently tweeted that their upcoming 9.4 release would be compatible with both Rails 7 and Ruby 3.1. Following the JRuby announcement, TruffleRuby 22.1 was released, so it seems the JVM-based Ruby ecosystem is more active than we anticipated. You can see the back and forth on RUBY-2781 and RUBY-2960.

## Decision

We decided to reverse our decision around JRuby, quite simply, because the community asked us to. Our decisions should be informed by the open source community - not just the developers who work at MongoDB - and if we are too hasty, or wrong, we would like to be able to hear that without flinching and respond appropriately.

So, though we weren't at RailsConf '22 this year, know that if your next application is built using JRuby, you should be able to count on MongoDB Atlas being ready to host your application's data.
md
{ "tags": [ "Ruby" ], "pageDescription": "MongoDB supports JRuby", "contentType": "News & Announcements" }
A Plan for MongoDB and JRuby
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/java/spring-java-mongodb-example-app2
created
# Build a MongoDB Spring Boot Java Book Tracker for Beginners

## Introduction

Build your first application with Java and Spring! This simple book-tracking app demonstrates basic CRUD operations: you can add a book, edit a book, and delete a book. The data is stored in a MongoDB database. A minimal sketch of what the persistence layer typically looks like follows the technology list below.

## Technology

* Java
* Spring Boot
* MongoDB
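As a rough, hedged sketch of the persistence layer in a Spring Boot + MongoDB CRUD app like this one, the snippet below defines a document class and a repository. The class, collection, and field names here are illustrative assumptions, not necessarily those used in the example repository.

```java
// ----- Book.java -----
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// A book entity persisted to the "books" collection.
@Document(collection = "books")
public class Book {

    @Id
    private String id;      // assigned by MongoDB on insert
    private String title;
    private String author;

    public Book(String title, String author) {
        this.title = title;
        this.author = author;
    }
    // Getters and setters omitted for brevity.
}

// ----- BookRepository.java -----
import org.springframework.data.mongodb.repository.MongoRepository;

// Spring Data generates the implementation at runtime:
// save(), findAll(), findById(), deleteById(), and more.
public interface BookRepository extends MongoRepository<Book, String> {
}
```

Injected into a controller or service, calls such as `bookRepository.save(new Book("The Hobbit", "J.R.R. Tolkien"))`, `bookRepository.findAll()`, and `bookRepository.deleteById(id)` cover the add/edit/delete flow described above.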
md
{ "tags": [ "Java", "Spring" ], "pageDescription": "Build an application to track the books you've read with Spring Boot, Java, and MongoDB", "contentType": "Code Example" }
Build a MongoDB Spring Boot Java Book Tracker for Beginners
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/active-active-application-architectures
created
# Active-Active Application Architectures with MongoDB ## Introduction Determining the best database for a modern application to be deployed across multiple data centers requires careful evaluation to accommodate a variety of complex application requirements. The database will be responsible for processing reads and writes in multiple geographies, replicating changes among them, and providing the highest possible availability, consistency, and durability guarantees. But not all technology choices are equal. For example, one database technology might provide a higher guarantee of availability while providing lower data consistency and durability guarantees than another technology. The tradeoffs made by an individual database technology will affect the behavior of the application upon which it is built. Unfortunately, there is limited understanding among many application architects as to the specific tradeoffs made by various modern databases. The popular belief appears to be that if an application must accept writes concurrently in multiple data centers, then it needs to use a multi-master database -where multiple masters are responsible for a single copy or partition of the data. This is a misconception and it is compounded by a limited understanding of the (potentially negative) implications this choice has on application behavior. To provide some clarity on this topic, this post will begin by describing the database capabilities required by modern multi-data center applications. Next, it describes the categories of database architectures used to realize these requirements and summarize the pros and cons of each. Finally, it will look at MongoDB specifically and describe how it fits into these categories. It will list some of the specific capabilities and design choices offered by MongoDB that make it suited for global application deployments. ## Active-Active Requirements When organizations consider deploying applications across multiple data centers (or cloud regions) they typically want to use an active-active architecture. At a high-level, this means deploying an application across multiple data centers where application servers in all data centers are simultaneously processing requests (Figure 1). This architecture aims to achieve a number of objectives: - Serve a globally distributed audience by providing local processing (low latencies) - Maintain always-on availability, even in the face of complete regional outages - Provide the best utilization of platform resources by allowing server resources in multiple data centers to be used in parallel to process application requests. An alternative to an active-active architecture is an active-disaster recovery (also known as active-passive) architecture consisting of a primary data center (region) and one or more disaster recovery (DR) regions (Figure 2). Under normal operating conditions, the primary data center processes requests and the DR center is idle. The DR site only starts processing requests (becomes active), if the primary data center fails. (Under normal situations, data is replicated from primary to DR sites, so that the the DR sites can take over if the primary data center fails). The definition of an active-active architecture is not universally agreed upon. 
Often, it is also used to describe application architectures that are similar to the active-DR architecture described above, with the distinction being that the failover from primary to DR site is fast (typically a few seconds) and automatic (no human intervention required). In this interpretation, an active-active architecture implies that application downtime is minimal (near zero). A common misconception is that an active-active application architecture requires a multi-master database. This is not only false, but using a multi-master database means relaxing requirements that most data owners hold dear: consistency and data durability. Consistency ensures that reads reflect the results of previous writes. Data durability ensures that committed writes will persist permanently: no data is lost due to the resolution of conflicting writes or node failures. Both these database requirements are essential for building applications that behave in the predictable and deterministic way users expect. To address the multi-master misconception, let's start by looking at the various database architectures that could be used to achieve an active-active application, and the pros and cons of each. Once we have done this, we will drill into MongoDB's architecture and look at how it can be used to deploy an Active-Active application architecture. ## Database Requirements for Active-Active Applications When designing an active-active application architecture, the database tier must meet four architectural requirements (in addition to standard database functionality: powerful query language with rich secondary indexes, low latency access to data, native drivers, comprehensive operational tooling, etc.): 1. **Performance** - low latency reads and writes. It typically means processing reads and writes on nodes in a data center local to the application. 2. **Data durability** - Implemented by replicating writes to multiple nodes so that data persists when system failures occur. 3. **Consistency** - Ensuring that readers see the results of previous writes, readers to various nodes in different regions get the same results, etc. 4. **Availability** - The database must continue to operate when nodes, data centers, or network connections fail. In addition, the recovery from these failures should be as short as possible. A typical requirement is a few seconds. Due to the laws of physics, e.g., the speed of light, it is not possiblefor any database to completely satisfy all these requirements at the same time, so the important consideration for any engineering team building an application is to understand the tradeoffs made by each database and selecting the one that provides for the application's most critical requirements. Let's look at each of these requirements in more detail. ## Performance For performance reasons, it is necessary for application servers in a data center to be able to perform reads and writes to database nodes in the same data center, as most applications require millisecond (a few to tens) response times from databases. Communication among nodes across multiple data centers can make it difficult to achieve performance SLAs. If local reads and write are not possible, then the latency associated with sending queries to remote servers significantly impacts application response time. For example, customers in Australia would not expect to have a far worse user experience than customers in the eastern US where the e-commerce vendors primary data center is located. 
In addition, the lack of network bandwidth between data centers can also be a limiting factor. ## Data Durability Replication is a critical feature in a distributed database. The database must ensure that writes made to one node are replicated to the other nodes that maintain replicas of the same record, even if these nodes are in different physical locations. The replication speed and data durability guarantees provided will vary among databases, and are influenced by: - The set of nodes that accept writes for a given record - The situations when data loss can occur - Whether conflicting writes (two different writes occurring to the same record in different data centers at about the same time) are allowed, and how they are resolved when they occur ## Consistency The consistency guarantees of a distributed database vary significantly. This variance depends upon a number of factors, including whether indexes are updated atomically with data, the replication mechanisms used, how much information individual nodes have about the status of corresponding records on other nodes, etc. The weakest level of consistency offered by most distributed databases is eventual consistency. It simply guarantees that, eventually, if all writes are stopped, the value for a record across all nodes in the database will eventually coalesce to the same value. It provides few guarantees about whether an individual application process will read the results of its write, or if value read is the latest value for a record. The strongest consistency guarantee that can be provided by distributed databases without severe impact to performance is causal consistency. As described by Wikipedia, causal consistency provides the following guarantees: - **Read Your Writes**: this means that preceding write operations are indicated and reflected by the following read operations. - **Monotonic Reads**: this implies that an up-to-date increasing set of write operations is guaranteed to be indicated by later read operations. - **Writes Follow Reads**: this provides an assurance that write operations follow and come after reads by which they are influenced. - **Monotonic Writes**: this guarantees that write operations must go after other writes that reasonably should precede them. Most distributed databases will provide consistency guarantees between eventual and causal consistency. The closer to causal consistency the more an application will behave as users expect, e.g., queries will return the values of previous writes, data won't appear to be lost, and data values will not change in non-deterministic ways. ## Availability The availability of a database describes how well the database survives the loss of a node, a data center, or network communication. The degree to which the database continues to process reads and writes in the event of different types of failures and the amount of time required to recover from failures will determine its availability. Some architectures will allow reads and writes to nodes isolated from the rest of the database cluster by a network partition, and thus provide a high level of availability. Also, different databases will vary in the amount of time it takes to detect and recover from failures, with some requiring manual operator intervention to restore a healthy database cluster. ## Distributed Database Architectures There are three broad categories of database architectures deployed to meet these requirements: 1. Distributed transactions using two-phase commit 2. 
Multi-Master, sometimes also called "masterless" 3. Partitioned (sharded) database with multiple primaries each responsible for a unique partition of the data Let's look at each of these options in more detail, as well as the pros and cons of each. ## Distributed Transactions with Two-Phase Commit A distributed transaction approach updates all nodes containing a record as part of a single transaction, instead of having writes being made to one node and then (asynchronously) replicated to other nodes. The transaction guarantees that all nodes will receive the update or the transaction will fail and all nodes will revert back to the previous state if there is any type of failure. A common protocol for implementing this functionality is called a two-phase commit. The two-phase commit protocol ensures durability and multi-node consistency, but it sacrifices performance. The two-phase commit protocol requires two-phases of communication among all the nodes involved in the transaction with requests and acknowledgments sent at each phase of the operation to ensure every node commits the same write at the same time. When database nodes are distributed across multiple data centers this often pushes query latency from the millisecond range to the multi-second range. Most applications, especially those where the clients are users (mobile devices, web browsers, client applications, etc.) find this level of response time unacceptable. ## Multi-Master A multi-master database is a distributed database that allows a record to be updated in one of many possible clustered nodes. (Writes are usually replicated so records exist on multiple nodes and in multiple data centers.) On the surface, a multi-master database seems like the ideal platform to realize an active-active architecture. It enables each application server to read and write to a local copy of the data with no restrictions. It has serious limitations, however, when it comes to data consistency. The challenge is that two (or more) copies of the same record may be updated simultaneously by different sessions in different locations. This leads to two different versions of the same record and the database, or sometimes the application itself, must perform conflict resolution to resolve this inconsistency. Most often, a conflict resolution strategy, such as most recent update wins or the record with the larger number of modifications wins, is used since performance would be significantly impacted if some other more sophisticated resolution strategy was applied. This also means that readers in different data centers may see a different and conflicting value for the same record for the time between the writes being applied and the completion of the conflict resolution mechanism. For example, let's assume we are using a multi-master database as the persistence store for a shopping cart application and this application is deployed in two data centers: East and West. At roughly the same time, a user in San Francisco adds an item to his shopping cart (a flashlight) while an inventory management process in the East data center invalidates a different shopping cart item (game console) for that same user in response to a supplier notification that the release date had been delayed (See times 0 to 1 in Figure 3). At time 1, the shopping cart records in the two data centers are different. 
The database will use its replication and conflict resolution mechanisms to resolve this inconsistency and eventually one of the two versions of the shopping cart (See time 2 in Figure 3) will be selected. Using the conflict resolution heuristics most often applied by multi-master databases (last update wins or most updated wins), it is impossible for the user or application to predict which version will be selected. In either case, data is lost and unexpected behavior occurs. If the East version is selected, then the user's selection of a flashlight is lost and if the West version is selected, the the game console is still in the cart. Either way, information is lost. Finally, any other process inspecting the shopping cart between times 1 and 2 is going to see non-deterministic behavior as well. For example, a background process that selects the fulfillment warehouse and updates the cart shipping costs would produce results that conflict with the eventual contents of the cart. If the process is running in the West and alternative 1 becomes reality, it would compute the shipping costs for all three items, even though the cart may soon have just one item, the book. The set of uses cases for multi-master databases is limited to the capture of non-mission-critical data, like log data, where the occasional lost record is acceptable. Most use cases cannot tolerate the combination of data loss resulting from throwing away one version of a record during conflict resolution, and inconsistent reads that occur during this process. ## Partitioned (Sharded) Database A partitioned database divides the database into partitions, called shards. Each shard is implemented by a set of servers each of which contains a complete copy of the partition's data. What is key here is that each shard maintains exclusive control of its partition of the data. At any given time, for each shard, one server acts as the primary and the other servers act as secondary replicas. Reads and writes are issued to the primary copy of the data. If the primary server fails for any reason (e.g., hardware failure, network partition) one of the secondary servers is automatically elected to primary. Each record in the database belongs to a specific partition, and is managed by exactly one shard, ensuring that it can only be written to the shard's primary. The mapping of records to shards and the existence of exactly one primary per shard ensures consistency. Since the cluster contains multiple shards, and hence multiple primaries (multiple masters), these primaries may be distributed among the data centers to ensure that writes can occur locally in each datacenter (Figure 4). A sharded database can be used to implement an active-active application architecture by deploying at least as many shards as data centers and placing the primaries for the shards so that each data center has at least one primary (Figure 5). In addition, the shards are configured so that each shard has at least one replica (copy of the data) in each of the datacenters. For example, the diagram in Figure 5 depicts a database architecture distributed across three datacenters: New York (NYC), London (LON), and Sydney (SYD). The cluster has three shards where each shard has three replicas. 
- The NYC shard has a primary in New York and secondaries in London and Sydney - The LON shard has a primary in London and secondaries in New York and Sydney - The SYD shard has a primary in Sydney and secondaries in New York and London In this way, each data center has secondaries from all the shards so the local app servers can read the entire data set and a primary for one shard so that writes can be made locally as well. The sharded database meets most of the consistency and performance requirements for a majority of use cases. Performance is great because reads and writes happen to local servers. When reading from the primaries, consistency is assured since each record is assigned to exactly one primary. This option requires architecting the application so that users/queries are routed to the data center that manages the data (contains the primary) for the query. Often this is done via geography. For example, if we have two data centers in the United States (New Jersey and Oregon), we might shard the data set by geography (East and West) and route traffic for East Coast users to the New Jersey data center, which contains the primary for the Eastern shard, and route traffic for West Coast users to the Oregon data center, which contains the primary for the Western shard. Let's revisit the shopping cart example using a sharded database. Again, let's assume two data centers: East and West. For this implementation, we would shard (partition) the shopping carts by their shopping card ID plus a data center field identifying the data center in which the shopping cart was created. The partitioning (Figure 6) would ensure that all shopping carts with a DataCenter field value of "East" would be managed by the shard with the primary in the East data center. The other shard would manage carts with the value of "West". In addition, we would need two instances of the inventory management service, one deployed in each data center, with responsibility for updating the carts owned by the local data center. This design assumes that there is some external process routing traffic to the correct data center. When a new cart is created, the user's session will be routed to the geographically closest data center and then assigned a DataCenter value for that data center. For an existing cart, the router can use the cart's DataCenter field to identify the correct data center. From this example, we can see that the sharded database gives us all the benefits of a multi-master database without the complexities that come from data inconsistency. Applications servers can read and write from their local primary, but because each cart is owned by a single primary, no inconsistencies can occur. In contrast, multi-master solutions have the potential for data loss and inconsistent reads. ## Database Architecture Comparison The pros and cons of how well each database architecture meets active-active application requirements is provided in Figure 7. In choosing between multi-master and sharded databases, the decision comes down to whether or not the application can tolerate potentially inconsistent reads and data loss. If the answer is yes, then a multi-master database might be slightly easier to deploy. If the answer is no, then a sharded database is the best option. Since inconsistency and data loss are not acceptable for most applications, a sharded database is usually the best option. ## MongoDB Active-Active Applications MongoDB is an example of a sharded database architecture. 
In MongoDB, the construct of a primary server and set of secondary servers is called a replica set. Replica sets provide high availability for each shard and a mechanism, called Zone Sharding, is used to configure the set of data managed by each shard. Zone sharding makes it possible to implement the geographical partitioning described in the previous section. The details of how to accomplish this are described in the MongoDB Multi-Data Center Deployments white paper and Zone Sharding documentation, but MongoDB operates as described in the "Partitioned (Sharded) Database" section. Numerous organizations use MongoDB to implement active-active application architectures. For example: - Ebay has codified the use of zone sharding to enable local reads and writes as one of its standard architecture patterns. - YouGov deploys MongoDB for their flagship survey system, called Gryphon, in a "write local, read global" pattern that facilitates active-active multi data center deployments spanning data centers in North America and Europe. - Ogilvy and Maher uses MongoDB as the persistence store for its core auditing application. Their sharded cluster spans three data centers in North America and Europe with active data centers in North American and mainland Europe and a DR data center in London. This architecture minimizes write latency and also supports local reads for centralized analytics and reporting against the entire data set. In addition to the standard sharded database functionality, MongoDB provides fine grain controls for write durability and read consistency that make it ideal for multi-data center deployments. For writes, a write concern can be specified to control write durability. The write concern enables the application to specify the number of replica set members that must apply the write before MongoDB acknowledges the write to the application. By providing a write concern, an application can be sure that when MongoDB acknowledges the write, the servers in one or more remote data centers have also applied the write. This ensures that database changes will not be lost in the event of node or a data center failure. In addition, MongoDB addresses one of the potential downsides of a sharded database: less than 100% write availability. Since there is only one primary for each record, if that primary fails, then there is a period of time when writes to the partition cannot occur. MongoDB combines extremely fast failover times with retryable writes. With retryable writes, MongoDB provides automated support for retrying writes that have failed due to transient system errors such as network failures or primary elections, therefore significantly simplifying application code. The speed of MongoDB's automated failover is another distinguishing feature that makes MongoDB ideally suited for multi-data center deployments. MongoDB is able to failover in 2-5 seconds (depending upon configuration and network reliability), when a node or data center fails or network split occurs. (Note, secondary reads can continue during the failover period.) After a failure occurs, the remaining replica set members will elect a new primary and MongoDB's driver, upon which most applications are built, will automatically identify this new primary. The recovery process is automatic and writes continue after the failover process completes. For reads, MongoDB provides two capabilities for specifying the desired level of consistency. 
First, when reading from secondaries, an application can specify a maximum staleness value (maxStalenessSeconds). This ensures that the secondary's replication lag from the primary cannot be greater than the specified duration, and thus, guarantees the currentness of the data being returned by the secondary. In addition, a read can also be associated with a ReadConcern to control the consistency of the data returned by the query. For example, a ReadConcern of majority tells MongoDB to only return data that has been replicated to a majority of nodes in the replica set. This ensures that the query is only reading data that will not be lost due to a node or data center failure, and gives the application a consistent view of the data over time. MongoDB 3.6 also introduced causal consistency - guaranteeing that every read operation within a client session will always see the previous write operation, regardless of which replica is serving the request. By enforcing strict, causal ordering of operations within a session, causal consistency ensures every read is always logically consistent, enabling monotonic reads from a distributed system - guarantees that cannot be met by most multi-node databases. Causal consistency allows developers to maintain the benefits of strict data consistency enforced by legacy single node relational databases, while modernizing their infrastructure to take advantage of the scalability and availability benefits of modern distributed data platforms. ## Conclusion In this post we have shown that sharded databases provide the best support for the replication, performance, consistency, and local-write, local-read requirements of active-active applications. The performance of distributed transaction databases is too slow and multi-master databases do not provide the required consistency guarantees. In addition, MongoDB is especially suited for multi-data center deployments due to its distributed architecture, fast failover and ability for applications to specify desired consistency and durability guarantees through Read and Write Concerns. View the MongoDB Architect Hub
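To make the durability and consistency controls discussed above concrete, here is a minimal, hedged sketch using the MongoDB Java driver. The connection string, database, collection, and field names are placeholders, and the specific write concern, read concern, staleness window, and session options are illustrative choices rather than prescriptions.

```java
import com.mongodb.ClientSessionOptions;
import com.mongodb.ReadConcern;
import com.mongodb.ReadPreference;
import com.mongodb.WriteConcern;
import com.mongodb.client.ClientSession;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

public class ConsistencyControlsSketch {
    public static void main(String[] args) {
        // Placeholder connection string for a geo-distributed sharded cluster.
        try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@<cluster>/")) {

            MongoCollection<Document> carts = client.getDatabase("shop")
                    .getCollection("carts")
                    // Writes are acknowledged only after a majority of replica set
                    // members (including copies in remote data centers) apply them.
                    .withWriteConcern(WriteConcern.MAJORITY)
                    // Reads return only majority-committed data, so results cannot be
                    // rolled back by a node or data center failure.
                    .withReadConcern(ReadConcern.MAJORITY)
                    // Allow local secondary reads, but never from a secondary lagging
                    // the primary by more than 90 seconds (maxStalenessSeconds).
                    .withReadPreference(ReadPreference.secondaryPreferred(90, TimeUnit.SECONDS));

            // A causally consistent session guarantees that reads within the session
            // observe the session's earlier writes, whichever member serves them.
            ClientSessionOptions options = ClientSessionOptions.builder()
                    .causallyConsistent(true)
                    .build();
            try (ClientSession session = client.startSession(options)) {
                carts.insertOne(session, new Document("cartId", "east-0001")
                        .append("dataCenter", "East")
                        .append("items", Arrays.asList("flashlight")));
                Document cart = carts.find(session, new Document("cartId", "east-0001")).first();
                System.out.println(cart);
            }
        }
    }
}
```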
md
{ "tags": [ "MongoDB" ], "pageDescription": "This post will begin by describing the database capabilities required by modern multi-data center applications.", "contentType": "Article" }
Active-Active Application Architectures with MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/atlas-search-exact-match
created
# Exact Matches in Atlas Search: Beginners Guide ## Contributors Much of this article was contributed by a MongoDB Intern, Humayara Karim. Thanks for spending your summer with us! ## Introduction Search engines are powerful tools that users rely on when they're looking for information. They oftentimes rely on them to handle the misspelling of words through a feature called fuzzy matching. Fuzzy matching identifies text, string, and even queries that are very similar but not the same. This is very useful. But a lot of the time, the search that is most useful is an exact match. I'm looking for a word, `foobar`, and I want `foobar`, not `foobarr` and not `greenfoobart`. Luckily, Atlas Search has solutions for both fuzzy searches as well as exact matches. This tutorial will focus on the different ways users can achieve exact matches as well as the pros and cons of each. In fact, there are quite a few ways to achieve exact matches with Atlas Search. ## (Let us count the) Ways to Exact Match in MongoDB Just like the NYC subway system, there are many ways to get to the same destination, and not all of them are good. So let's talk about the various methods of doing exact match searches, and the pros and cons. ## Atlas Search Index Analyzers These are policies that allow users to define filters for the text matches they are looking for. For example, if you wanted to find an exact match for a string of text, the best analyzer to use would be the **Keyword Analyzer** as this analyzer indexes text fields as single terms by accepting a string or array of strings as a parameter. If you wanted to return exact matches that contain a specific word, the **Standard Analyzer** would be your go-to as it divides texts based on word-boundaries. It's crucial to first identify and understand the appropriate analyzer you will need based on your use case. This is where MongoDB makes our life easier because you can find all the built-in analyzers Atlas Search supports and their purposes all in one place, as shown below: **Pros**: Users can also make custom and multi analyzers to cater to specific application needs. There are examples on the MongoDB Developer Community Forums demonstrating folks doing this in the wild. Here's some code for case insensitive search using a custom analyzer and with the keyword tokenizer and a lowercase token filter: ```"analyzers": { "charFilters": [], "name": "search_keyword_lowercaser", "tokenFilters": [ { "type": "lowercase" } ], "tokenizer": { "type": "keyword" } } ] ``` Or, a lucene.keyword analyzer for single-word exact match queries and phrase query for multi-word exact match queries [here: ``` { $search: { "index": "movies_search_index" "phrase": { "query": "Red Robin", "path": "title" } } } ``` **Cons**: Dealing with case insensitivity search isn’t super straightforward. It's not impossible, of course, but it requires a few extra steps where you would have to define a custom analyzer and run a diacritic-insensitive query. There's a step by step guide on how to do this here. ## The Phrase Operator AKA a "multi-word exact match thing." The Phrase Operator can get exact match queries on multiple words (tokens) in a field. But why use a phrase operator instead of only relying on an analyzer? It’s because the phrase operator searches for an *ordered sequence* of terms with the help of an analyzer defined in the index configuration. 
Take a look at this example, where we want to search the phrases “the man” and “the moon” in a movie titles collection: ``` db.movies.aggregate( { "$search": { "phrase": { "path": "title", "query": ["the man", "the moon"] } } }, { $limit: 10 }, { $project: { "_id": 0, "title": 1, score: { $meta: "searchScore" } } } ]) ``` As you can see, the query returns all the results the contain ordered sequence terms “the man” and “the moon.” ``` { "title" : "The Man in the Moon", "score" : 4.500046730041504 } { "title" : "Shoot the Moon", "score" : 3.278003215789795 } { "title" : "Kick the Moon", "score" : 3.278003215789795 } { "title" : "The Man", "score" : 2.8860299587249756 } { "title" : "The Moon and Sixpence", "score" : 2.8754563331604004 } { "title" : "The Moon Is Blue", "score" : 2.8754563331604004 } { "title" : "Racing with the Moon", "score" : 2.8754563331604004 } { "title" : "Mountains of the Moon", "score" : 2.8754563331604004 } { "title" : "Man on the Moon", "score" : 2.8754563331604004 } { "title" : "Castaway on the Moon", "score" : 2.8754563331604004 } ``` **Pros:** There are quite a few [field type options you can use with phrase that gives users the flexibility to customize the exact phrases they want to return. **Cons:** The phrase operator isn’t compatible with synonym search. What this means is that even if you have synonyms enabled, there can be a chance where your search results are whole phrases instead of an individual word. However, you can use the compound operator with two should clauses, one with the text query that uses synonyms and another that doesn't, to help go about this issue. Here is a sample code snippet of how to achieve this: ``` compound: { should: { text: { query: "radio tower", path: { "wildcard": "*" }, synonyms: "synonymCollection" } }, { text: { query: "radio tower", path: { "wildcard": "*" } } } ] } } ``` ## Autocomplete Operator There are few things in life that thrill me as much as the [autocomplete. Remember the sheer, wild joy of using that for the first time with Google search? It was just brilliant. It was one of things that made me want to work in technology in the first place. You type, and the machines *know what you're thinking!* And oh yea, it helps me from getting "no search results" repeatedly by guiding me to the correct terminology. Tutorial on how to implement this for yourself is here. **Pros:** Autocomplete is awesome. Faster and more responsive search! **Cons:** There are some limitations with auto-complete. You essentially have to weigh the tradeoffs between *faster* results vs *more relevant* results. There are potential workarounds, of course. You can get your exact match score higher by making your autocompleted fields indexed as a string, querying using compound operators, etc... but yea, those tradeoffs are real. I still think it's preferable over plain search, though. ## Text Operator As the name suggests, this operator allows users to search text. Here is how the syntax for the text operator looks: ``` { $search: { "index": , // optional, defaults to "default" "text": { "query": "", "path": "", "fuzzy": , "score": , "synonyms": "" } } } ``` If you're searching for a *single term* and want to use full text search to do it, this is the operator for you. Simple, effective, no frills. It's simplicity means it's hard to mess up, and you can use it in complex use cases without worrying. You can also layer the text operator with other items. 
The `text` operator also supports synonyms and score matching as shown here: ``` db.movies.aggregate( { $search: { "text": { "path": "title", "query": "automobile", "synonyms": "transportSynonyms" } } }, { $limit: 10 }, { $project: { "_id": 0, "title": 1, "score": { $meta: "searchScore" } } } ]) ``` ``` db.movies.aggregate([ { $search: { "text": { "query": "Helsinki", "path": "plot" } } }, { $project: { plot: 1, title: 1, score: { $meta: "searchScore" } } } ]) ``` **Pros:** Straightforward, easy to use. **Cons:** The terms in your query are considered individually, so if you want to return a result that contains more than a single word, you have to nest your operators. Not a huge deal, but as a downside, you'll probably have to conduct a little research on the [other operators that fit with your use case. ## Highlighting Although this feature doesn’t necessarily return exact matches like the other features, it's worth *highlighting. (See what I did there?!)* I love this feature. It's super useful. Highlight allows users to visually see exact matches. This option also allows users to visually return search terms in their original context. In your application UI, the highlight feature looks like so: If you’re interested in learning how to build an application like this, here is a step by step tutorial visually showing Atlas Search highlights with JavaScript and HTML. **Pros**: Aesthetically, this feature enhances user search experience because users can easily see what they are searching for in a given text. **Cons**: It can be costly if passages are long because a lot more RAM will be needed to hold the data. In addition, this feature does not work with autocomplete. ## Conclusion Ultimately, there are many ways to achieve exact matches with Atlas Search. Your best approach is to skim through a few of the tutorials in the documentation and take a look at the Atlas search section here in the DevCenter and then tinker with it.
md
{ "tags": [ "Atlas" ], "pageDescription": "This tutorial will focus on the different ways users can achieve exact matches as well as the pros and cons of each.", "contentType": "Article" }
Exact Matches in Atlas Search: Beginners Guide
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/csharp/saving-data-in-unity3d-using-sqlite
created
# Saving Data in Unity3D Using SQLite (Part 4 of the Persistence Comparison Series) Our journey of exploring options given to use when it comes persistence in Unity will in this part lead to databases. More specificaclly: SQLite. SQLite is a C-based database that is used in many areas. It has been around for a long time and also found its way into the Unity world. During this tutorial series, we have seen options like `PlayerPrefs` in Unity, and on the other side, `File` and `BinaryWriter`/`BinaryReader` provided by the underlying .NET framework. Here is an overview of the complete series: - Part 1: PlayerPrefs - Part 2: Files - Part 3: BinaryReader and BinaryWriter - Part 4: SQL *(this tutorial)* - Part 5: Realm Unity SDK *(coming soon)* - Part 6: Comparison of all these options Similar to the previous parts, this tutorial can also be found in our Unity examples repository on the persistence-comparison branch. Each part is sorted into a folder. The three scripts we will be looking at in this tutorial are in the `SQLite` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding. ## Example game *Note that if you have worked through any of the other tutorials in this series, you can skip this section since we're using the same example for all parts of the series, so that it's easier to see the differences between the approaches.* The goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game. Therefore, the example we'll be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write. A simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count. When you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`. You can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector. The scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`. ```cs using UnityEngine; /// /// This script shows the basic structure of all other scripts. /// public class HitCountExample : MonoBehaviour { // Keep count of the clicks. SerializeField] private int hitCount; // 1 private void Start() // 2 { // Read the persisted data and set the initial hit count. hitCount = 0; // 3 } private void OnMouseDown() // 4 { // Increment the hit count on each click and save the data. hitCount++; // 5 } } ``` The first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor. Whenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to. The second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). 
In this case, we increment the `hitCount` (5), which will eventually be saved by the various options shown in this tutorial series.

## SQLite

(See `SqliteExampleSimple.cs` in the repository for the finished version.)

Now let's make sure our hit count gets persisted so we can continue playing the next time we start the game.

SQLite is not included by default in a new Unity project and is also not available directly via the Unity package manager. We have to install two components to start using it.

First, head over to https://sqlite.org/download.html and choose the `Precompiled Binaries` for your operating system. Unzip it and add the two files—`sqlite3.def` and `sqlite3.dll`—to the `Plugins` folder in your Unity project.

Then, open a file explorer in your Unity Hub installation directory, and head to the following sub directory:

```
Unity/Hub/Editor/2021.2.11f1/Editor/Data/MonoBleedingEdge/lib/mono/unity
```

In there, you will find the file `Mono.Data.Sqlite.dll`, which also needs to be moved to the `Plugins` folder in your Unity project.

The result when going back to the Editor should look like this:

Now that the preparations are finished, we want to add our first script to the capsule. Similar to the `HitCountExample.cs`, create a new `C# script` and name it `SqliteExampleSimple`.

When opening it, the first thing we want to do is import SQLite by adding `using Mono.Data.Sqlite;` and `using System.Data;` at the top of the file (1).

Next, we will look at how to save whenever the hit count is changed, which happens during `OnMouseDown()`. First, we need to open a connection to the database. This is offered by the SQLite library via the `IDbConnection` class (2), which represents an open connection to the database.

Since we will need a connection for loading the data later on again, we will extract opening a database connection into another function and call it `private IDbConnection CreateAndOpenDatabase()` (3).

In there, we first define a name for our database file. I'll just call it `MyDatabase` for now. Accordingly, the URI should be `"URI=file:MyDatabase.sqlite"` (4). Then we can create a connection to this database using `new SqliteConnection(dbUri)` (5) and open it with `dbConnection.Open()` (6).

```cs
using Mono.Data.Sqlite; // 1
using System.Data; // 1
using UnityEngine;

public class SqliteExampleSimple : MonoBehaviour
{
    // Resources:
    // https://www.mono-project.com/docs/database-access/providers/sqlite/

    [SerializeField] private int hitCount = 0;

    void Start() // 13
    {
        // Read all values from the table.
        IDbConnection dbConnection = CreateAndOpenDatabase(); // 14
        IDbCommand dbCommandReadValues = dbConnection.CreateCommand(); // 15
        dbCommandReadValues.CommandText = "SELECT * FROM HitCountTableSimple"; // 16
        IDataReader dataReader = dbCommandReadValues.ExecuteReader(); // 17

        while (dataReader.Read()) // 18
        {
            // The `id` has index 0, our `hits` have the index 1.
            hitCount = dataReader.GetInt32(1); // 19
        }

        // Remember to always close the connection at the end.
        dbConnection.Close(); // 20
    }

    private void OnMouseDown()
    {
        hitCount++;

        // Insert hits into the table.
        IDbConnection dbConnection = CreateAndOpenDatabase(); // 2
        IDbCommand dbCommandInsertValue = dbConnection.CreateCommand(); // 9
        dbCommandInsertValue.CommandText = "INSERT OR REPLACE INTO HitCountTableSimple (id, hits) VALUES (0, " + hitCount + ")"; // 10
        dbCommandInsertValue.ExecuteNonQuery(); // 11

        // Remember to always close the connection at the end.
        dbConnection.Close(); // 12
    }

    private IDbConnection CreateAndOpenDatabase() // 3
    {
        // Open a connection to the database.
        string dbUri = "URI=file:MyDatabase.sqlite"; // 4
        IDbConnection dbConnection = new SqliteConnection(dbUri); // 5
        dbConnection.Open(); // 6

        // Create a table for the hit count in the database if it does not exist yet.
        IDbCommand dbCommandCreateTable = dbConnection.CreateCommand(); // 6
        dbCommandCreateTable.CommandText = "CREATE TABLE IF NOT EXISTS HitCountTableSimple (id INTEGER PRIMARY KEY, hits INTEGER )"; // 7
        dbCommandCreateTable.ExecuteReader(); // 8

        return dbConnection;
    }
}
```

Now we can work with this SQLite database. Before we can actually add data to it, though, we need to set up a structure. This means creating and defining tables, which is the way most databases are organized. The following screenshot shows the final state we will create in this example.

When accessing or modifying the database, we use `IDbCommand` (6), which represents an SQL statement that can be executed on a database. Let's create a new table and define some columns using the following command (7):

```sql
"CREATE TABLE IF NOT EXISTS HitCountTableSimple (id INTEGER PRIMARY KEY, hits INTEGER )"
```

So, what does this statement mean? First, we need to state what we want to do, which is `CREATE TABLE IF NOT EXISTS`. Then, we need to name this table, which will just be the same as the script we are working on right now: `HitCountTableSimple`.

Last but not least, we need to define how this new table is supposed to look. This is done by naming all columns as a tuple: `(id INTEGER PRIMARY KEY, hits INTEGER )`. The first one defines a column `id` of type `INTEGER` which is our `PRIMARY KEY`. The second one defines a column `hits` of type `INTEGER`.

After assigning this statement as the `CommandText`, we need to call `ExecuteReader()` (8) on `dbCommandCreateTable` to run it.

Now back to `OnMouseDown()`. With the `dbConnection` created, we can now go ahead and define another `IDbCommand` (9) to modify the new table we just created and add some data. This time, the `CommandText` (10) will be:

```sql
"INSERT OR REPLACE INTO HitCountTableSimple (id, hits) VALUES (0, " + hitCount + ")"
```

Let's decipher this one too: `INSERT OR REPLACE INTO` adds a new row to a table or updates it if it already exists. Next is the table name that we want to insert into, `HitCountTableSimple`. This is followed by a tuple of columns that we would like to change, `(id, hits)`. The statement `VALUES (0, " + hitCount + ")` then defines the values that should be inserted, also as a tuple. In this case, we just choose `0` for the key and use whatever the current `hitCount` is as the value.

As opposed to creating the table, we execute this command by calling `ExecuteNonQuery()` (11) on it. The difference can be defined as follows:

> ExecuteReader is used for any result set with multiple rows/columns (e.g., SELECT col1, col2 from sometable). ExecuteNonQuery is typically used for SQL statements without results (e.g., UPDATE, INSERT, etc.).

All that's left to do is to properly `Close()` (12) the database.

How can we actually verify that this worked out before we continue on to reading the values from the database again? Well, the easiest way would be to just look into the database. There are many tools out there to achieve this. One of the open source options would be https://sqlitebrowser.org/.
After downloading and installing it, all you need to do is `File -> Open Database`, and then browse to your Unity project and select the `MyDatabase.sqlite` file. If you then choose the `Table` `HitCountTableSimple`, the result should look something like this:

Go ahead and run your game. Click a couple of times on the capsule and check the Inspector for the change. When you then go back to the DB browser and click refresh, the same number should appear in the `hits` column of the table.

The next time we start the game, we want to load this hit count from the database again. We use the `Start()` function (13) since it only needs to be done when the scene loads.

As before, we need to get a hold of the database with an `IDbConnection` (14) and create a new `IDbCommand` (15) to read the data. Since there is only one table and one value, it's quite simple for now. We can just read `all data` by using:

```sql
"SELECT * FROM HitCountTableSimple"
```

In this case, `SELECT` stands for `read the following values`, followed by a `*` which indicates to read all the data. The keyword `FROM` then specifies the table that should be read from, which is again `HitCountTableSimple`.

Finally, we execute this command using `ExecuteReader()` (17) since we expect data back. This data is saved in an `IDataReader`, from the documentation:

> Provides a means of reading one or more forward-only streams of result sets obtained by executing a command at a data source, and is implemented by .NET data providers that access relational databases.

`IDataReader` addresses its content in an indexed fashion, where the ordering matches that of the columns in the SQL table. So in our case, `id` has index 0, and `hits` has index 1.

The way this data is read is row by row. Each time we call `dataReader.Read()` (18), we read another row from the table. Since we know there is only one row in the table, we can just assign the `hits` value of that row to the `hitCount` using its index 1. The `hits` column is of type `INTEGER`, so we need to use `GetInt32(1)` to read it and specify the index of the field we want to read as a parameter, `id` being `0` and `hits` being `1`.

As before, in the end, we want to properly `Close()` the database (20).

When you restart the game again, you should now see an initial value for `hitCount` that is read from the database.

## Extended example

(See `SqliteExampleExtended.cs` in the repository for the finished version.)

In the previous section, we looked at the simplest version of a database example you can think of: one table, one row, and only one value we're interested in. Even though a database like SQLite can deal with any kind of complexity, we want to be able to compare it to the previous parts of this tutorial series and will therefore look at the same `Extended example`, using three hit counts instead of one and using modifier keys to identify them: `Shift` and `Control`.

Let's start by creating a new script `SqliteExampleExtended.cs` and attach it to the capsule. Copy over the code from `SqliteExampleSimple` and apply the following changes to it. First, define the three hit counts:

```cs
[SerializeField] private int hitCountUnmodified = 0;
[SerializeField] private int hitCountShift = 0;
[SerializeField] private int hitCountControl = 0;
```

Detecting which key is pressed (in addition to the mouse click) can be done using the `Input` class that is part of the Unity SDK. Calling `Input.GetKey()`, we can check if a certain key was pressed.
This has to be done during `Update()`, which is the Unity function that is called each frame. The reason for this is stated in the documentation:

> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.

The key that was pressed needs to be remembered when receiving the `OnMouseDown()` event. Hence, we need to add a private field to save it like so:

```cs
private KeyCode modifier = default;
```

Now the `Update()` function can look like this:

```cs
private void Update()
{
    // Check if a key was pressed.
    if (Input.GetKey(KeyCode.LeftShift)) // 1
    {
        // Set the LeftShift key.
        modifier = KeyCode.LeftShift; // 2
    }
    else if (Input.GetKey(KeyCode.LeftControl)) // 1
    {
        // Set the LeftControl key.
        modifier = KeyCode.LeftControl; // 2
    }
    else // 3
    {
        // In any other case reset to default and consider it unmodified.
        modifier = default; // 4
    }
}
```

First, we check if the `LeftShift` or `LeftControl` key was pressed (1) and, if so, save the corresponding `KeyCode` in `modifier` (2). Note that you can use the `string` name of the key that you are looking for or the more type-safe `KeyCode` enum.

In case neither of those two keys was pressed (3), we define this as the `unmodified` state and just set `modifier` back to its `default` (4).

Before we continue on to `OnMouseDown()`, you might ask what changes we need to make in the database structure that is created by `private IDbConnection CreateAndOpenDatabase()`. It turns out we actually don't need to change anything at all. We will just use the `id` introduced in the previous section and save the `KeyCode` (which is an integer) in it.

To be able to compare both versions later on, we will change the table name though and call it `HitCountTableExtended`:

```cs
dbCommandCreateTable.CommandText = "CREATE TABLE IF NOT EXISTS HitCountTableExtended (id INTEGER PRIMARY KEY, hits INTEGER)";
```

Now, let's look at how detecting mouse clicks needs to be modified to account for those keys:

```cs
private void OnMouseDown()
{
    var hitCount = 0;
    switch (modifier) // 1
    {
        case KeyCode.LeftShift:
            // Increment the hit count for the Shift modifier.
            hitCount = ++hitCountShift; // 2
            break;
        case KeyCode.LeftControl:
            // Increment the hit count for the Control modifier.
            hitCount = ++hitCountControl; // 2
            break;
        default:
            // Increment the unmodified hit count.
            hitCount = ++hitCountUnmodified; // 2
            break;
    }

    // Insert a value into the table.
    IDbConnection dbConnection = CreateAndOpenDatabase();
    IDbCommand dbCommandInsertValue = dbConnection.CreateCommand();
    dbCommandInsertValue.CommandText = "INSERT OR REPLACE INTO HitCountTableExtended (id, hits) VALUES (" + (int)modifier + ", " + hitCount + ")";
    dbCommandInsertValue.ExecuteNonQuery();

    // Remember to always close the connection at the end.
    dbConnection.Close();
}
```

First, we need to check which modifier was used in the last frame (1). Depending on this, we increment the corresponding hit count and assign it to the local variable `hitCount` (2). As before, we count any other key than `LeftShift` and `LeftControl` as `unmodified`.

Now, all we need to change in the second part of this function is the `id` that we set statically to `0` before and instead use the `KeyCode`. The updated SQL statement should look like this:

```sql
"INSERT OR REPLACE INTO HitCountTableExtended (id, hits) VALUES (" + (int)modifier + ", " + hitCount + ")"
```

The `VALUES` tuple now needs to set `(int)modifier` (note that the `enum` needs to be cast to `int`) and `hitCount` as its two values.
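A quick aside before we try it out: to keep the tutorial easy to follow, the SQL statements above are built by concatenating `hitCount` and `(int)modifier` directly into the string. If you take this approach into a real project, it's generally safer to use parameterized commands, which avoid quoting mistakes and SQL injection. The following is only a minimal sketch of what that could look like with the same `Mono.Data.Sqlite` classes; it is not part of the example scripts in the repository and assumes the same fields as `SqliteExampleExtended`:

```cs
// Hypothetical variation of the insert above, using parameters instead of string concatenation.
IDbCommand dbCommandInsertValue = dbConnection.CreateCommand();
dbCommandInsertValue.CommandText =
    "INSERT OR REPLACE INTO HitCountTableExtended (id, hits) VALUES (@id, @hits)";

// Create and attach the `id` parameter.
IDbDataParameter idParameter = dbCommandInsertValue.CreateParameter();
idParameter.ParameterName = "@id";
idParameter.Value = (int)modifier;
dbCommandInsertValue.Parameters.Add(idParameter);

// Create and attach the `hits` parameter.
IDbDataParameter hitsParameter = dbCommandInsertValue.CreateParameter();
hitsParameter.ParameterName = "@hits";
hitsParameter.Value = hitCount;
dbCommandInsertValue.Parameters.Add(hitsParameter);

dbCommandInsertValue.ExecuteNonQuery();
```

The behavior is the same as the concatenated version; the driver just takes care of inserting the values safely.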
As before, we can start the game and look at the saving part in action first. Click a couple of times until the Inspector shows some numbers for all three hit counts:

Now, let's open the DB browser again and this time choose the `HitCountTableExtended` from the drop-down:

As you can see, there are three rows, with the `hits` values being equal to the hit counts you see in the Inspector. In the `id` column, we see the three entries for `KeyCode.None` (0), `KeyCode.LeftShift` (304), and `KeyCode.LeftControl` (306).

Finally, let's read those values from the database when restarting the game.

```cs
void Start()
{
    // Read all values from the table.
    IDbConnection dbConnection = CreateAndOpenDatabase(); // 1
    IDbCommand dbCommandReadValues = dbConnection.CreateCommand(); // 2
    dbCommandReadValues.CommandText = "SELECT * FROM HitCountTableExtended"; // 3
    IDataReader dataReader = dbCommandReadValues.ExecuteReader(); // 4

    while (dataReader.Read()) // 5
    {
        // The `id` has index 0, our `hits` has index 1.
        var id = dataReader.GetInt32(0); // 6
        var hits = dataReader.GetInt32(1); // 7
        if (id == (int)KeyCode.LeftShift) // 8
        {
            hitCountShift = hits; // 9
        }
        else if (id == (int)KeyCode.LeftControl) // 8
        {
            hitCountControl = hits; // 9
        }
        else
        {
            hitCountUnmodified = hits; // 9
        }
    }

    // Remember to always close the connection at the end.
    dbConnection.Close();
}
```

The first part works basically unchanged: creating an `IDbConnection` (1) and an `IDbCommand` (2) and then reading all rows again with `SELECT *` (3), but this time from `HitCountTableExtended`, finished by actually executing the command with `ExecuteReader()` (4).

For the next part, we now need to read each row (5) and then check which `KeyCode` it belongs to. We grab the `id` from index `0` (6) and the `hits` from index `1` (7) as before. Then, we check the `id` against the `KeyCode` (8) and assign it to the corresponding `hitCount` (9).

Now restart the game and try it out!

## Conclusion

SQLite is one of the options when it comes to persistence. If you've read the previous tutorials, you've noticed that using it might at first seem a bit more complicated than the simple `PlayerPrefs`. You have to learn an additional "language" to be able to communicate with your database. And due to the nature of SQL not being the easiest format to read, it might seem a bit intimidating at first. But the world of databases offers a lot more than can be shown in a short tutorial like this!

One of the downsides of plain files or `PlayerPrefs` that we've seen was the difficulty of keeping data in a structured way—especially when it gets more complicated or relationships between objects need to be drawn. We looked at JSON as a way to improve that situation, but as soon as we need to change the format and migrate our structure, it gets quite complicated. Encryption is another topic that might be important for you—`PlayerPrefs` and `File` are not safe and can easily be read. Those are just some of the areas where a database like SQLite might help you achieve the requirements you have for persisting your data.

In the next tutorial, we will look at another database, the Realm Unity SDK, which offers similar advantages to SQLite, while being very easy to use at the same time.

Please provide feedback and ask any questions in the Realm Community Forum.
md
{ "tags": [ "C#", "MongoDB", "Unity", "SQL" ], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Code Example" }
Saving Data in Unity3D Using SQLite
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/seed-database-with-fake-data
created
# How to Seed a MongoDB Database with Fake Data

Have you ever worked on a MongoDB project and needed to seed your database with fake data in order to provide initial values for lookups, demo purposes, proofs of concept, etc.? I'm biased, but I've had to seed a MongoDB database countless times.

First of all, what is database seeding? Database seeding is the initial seeding of a database with data: a process in which an initial set of data is provided to a database when it is set up.

In this post, you will learn how to get a working seed script set up for MongoDB databases using Node.js and faker.js.

## The Code

This example code uses a single collection of fake IoT data (modeled after my IoT Kitty Litter Box project). However, you can change the shape of your template document to fit the needs of your application. I am using faker.js to create the fake data. Please refer to the documentation if you want to make any changes. You can also adapt this script to seed data into multiple collections or databases, if needed.

I am saving my data into a MongoDB Atlas database. It's the easiest way to get a MongoDB database up and running. You'll need to get your MongoDB connection URI before you can run this script. For information on how to connect your application to MongoDB, check out the docs.

Alright, now that we have got the setup out of the way, let's jump into the code!

``` js
/* mySeedScript.js */

// require the necessary libraries
const faker = require("faker");
const MongoClient = require("mongodb").MongoClient;

function randomIntFromInterval(min, max) { // min and max included
    return Math.floor(Math.random() * (max - min + 1) + min);
}

async function seedDB() {
    // Connection URL
    const uri = "YOUR MONGODB ATLAS URI";

    const client = new MongoClient(uri, {
        useNewUrlParser: true,
        // useUnifiedTopology: true,
    });

    try {
        await client.connect();
        console.log("Connected correctly to server");

        const collection = client.db("iot").collection("kitty-litter-time-series");

        // The drop() command destroys all data from a collection.
        // Make sure you run it against the proper database and collection.
        await collection.drop();

        // make a bunch of time series data
        let timeSeriesData = [];

        for (let i = 0; i < 5000; i++) {
            const firstName = faker.name.firstName();
            const lastName = faker.name.lastName();
            let newDay = {
                timestamp_day: faker.date.past(),
                cat: faker.random.word(),
                owner: {
                    email: faker.internet.email(firstName, lastName),
                    firstName,
                    lastName,
                },
                events: [],
            };

            for (let j = 0; j < randomIntFromInterval(1, 6); j++) {
                let newEvent = {
                    timestamp_event: faker.date.past(),
                    weight: randomIntFromInterval(14, 16),
                }
                newDay.events.push(newEvent);
            }
            timeSeriesData.push(newDay);
        }
        await collection.insertMany(timeSeriesData);

        console.log("Database seeded! :)");
        await client.close();
    } catch (err) {
        console.log(err.stack);
    }
}

seedDB();
```

After running the script above, be sure to check out your database to ensure that your data has been properly seeded. This is what my database looks like after running the script above.

Once your fake seed data is in the MongoDB database, you're done! Congratulations!

## Wrapping Up

There are lots of reasons you might want to seed your MongoDB database, and populating a MongoDB database can be easy and fun without requiring any fancy tools or frameworks. We have been able to automate this task by using MongoDB, faker.js, and Node.js. Give it a try and let me know how it works for you!
Having issues with seeding your database? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
md
{ "tags": [ "MongoDB" ], "pageDescription": "Learn how to seed a MongoDB database with fake data.", "contentType": "Tutorial" }
How to Seed a MongoDB Database with Fake Data
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/react-query-rest-api-realm
created
md
{ "tags": [ "JavaScript", "React" ], "pageDescription": "Learn how to query a REST API built with MongoDB Atlas App Services, React and Axios", "contentType": "Code Example" }
Build a Simple Website with React, Axios, and a REST API Built with MongoDB Atlas App Services
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/email-password-authentication-app-services
created
# Configure Email/Password Authentication in MongoDB Atlas App Services > **Note:** GraphQL is deprecated. Learn more. One of the things I like the most is building full-stack apps using Node.js, React, and MongoDB. Every time I get a billion-dollar idea, I immediately start building it using this tech stack. No matter what app I’m working on, there are a few features that are common: - Authentication and authorization: login, signup, and access controls. - Basic CRUD (Create, Read, Update, and Delete) operations. - Data analytics. - Web application deployment. And without a doubt, all of them play an essential role in any full-stack application. But still, they take a lot of time and energy to build and are mostly repetitive in nature. Therefore, we are left with significantly less time to build the features that our customers are waiting for. In an ideal scenario, your time as a developer should be spent on implementing features and not reinventing the wheel. With MongoDB Atlas App Services, you don’t have to worry about that. All you have to do is connect your client app to the service you need and you’re ready to rock! Throughout this series, you will learn how to build a full stack web application with MongoDB Atlas App Services, GraphQL, and React. We will be building an expense manager application called Expengo. ## Authentication Implementing authentication in your app usually requires you to create and deploy a server while making sure that emails are unique, passwords are encrypted, and sessions/tokens are managed securely. In this blog, we’ll configure email/password authentication on Atlas App Services. In the subsequent part of this series, we’ll integrate this with our React app. ## MongoDB Atlas App Services authentication providers MongoDB Atlas is a developer data platform integrating a multi-cloud database service with a set of data services. Atlas App Services provide secure serverless backend services and APIs to save you hours of coding. For authentication, you can choose from many different providers such as email/password, API key, Google, Apple, and Facebook. For this tutorial, we’ll use the email/password authentication provider. ## Deploy your free tier Atlas cluster If you haven’t already, deploy a free tier MongoDB Atlas cluster. This will allow us to store and retrieve data from our database deployment. You will be asked to add your IP to the IP access list and create a username/password to access your database. Once a cluster is created, you can create an App Service and link to it. ## Set up your App Service Now, click on the “App Services” tab as highlighted in the image below: There are a variety of templates one can choose from. For this tutorial, we will continue with the “Build your own App” template and click “Next.” Add application information in the next pop-up and click on “Create App Service.” Click on “Close Guides” in the next pop-up screen. Now click on “Authentication” in the side-bar. Then, click on the “Edit” button on the right side of Email/Password in the list of Authentication Providers. Make sure the Provider Enabled toggle is set to On. On this page, we may also configure the user confirmation settings and the password reset settings for our application. For the sake of simplicity of this tutorial, we will choose: 1. User confirmation method: “Automatically confirm users.” 2. Password reset method: “Send a password reset email.” 3. Placeholder password reset URL: http://localhost:3000/resetPassword. 
> We're not going to implement a password reset functionality in our client application. With that said, the URL you enter here doesn't really matter. If you want to learn how to reset passwords with App Services, check out the dedicated documentation. 4. Click “Save Draft.” Once your Draft has been saved, you will see a blue pop-up at the top, with a “Review Draft & Deploy” button. Click on it and wait for a few moments. You will see a pop-up displaying all the changes you made in this draft. Click on “Deploy” to deploy these changes: You will see a “Deployment was successful” message in green at the top if everything goes fine. Yay! ## Conclusion Please note that all the screenshots were last updated in August 2022. Some UX details may have changed in more recent releases. In the next article of the series, we will learn how we can utilize this email/password authentication provider in our React app.
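To give a rough idea of where this is heading, here is a minimal, hypothetical sketch of how a JavaScript client can register and log in a user against this provider using the Realm Web SDK. The App ID is a placeholder, and the full React integration is covered in the next part of the series:

```js
import * as Realm from "realm-web";

// "expengo-abcde" is a placeholder — use your own App ID from the App Services UI.
const app = new Realm.App({ id: "expengo-abcde" });

async function registerAndLogIn(email, password) {
  // Create the user. With "automatically confirm users" enabled,
  // no confirmation step is required before logging in.
  // (Older SDK versions take (email, password) instead of an object.)
  await app.emailPasswordAuth.registerUser({ email, password });

  // Log in with the email/password credentials we just registered.
  const credentials = Realm.Credentials.emailPassword(email, password);
  const user = await app.logIn(credentials);
  return user;
}
```

The returned `user` object is what the client app holds on to for subsequent authenticated calls.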
md
{ "tags": [ "Atlas" ], "pageDescription": "In less than 6 steps, learn how to set up authentication and allow your users to log in and sign up to your app without writing a single line of server code.", "contentType": "Tutorial" }
Configure Email/Password Authentication in MongoDB Atlas App Services
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/keep-mongodb-serverless-costs-low
created
# Keeping Your Costs Down with MongoDB Atlas Serverless Instances The new MongoDB Atlas serverless instance types are pretty great, especially if you\'re running intermittent workloads, such as in production scenarios where the load is periodic and spiky (I used to work for a big sports website that was quiet unless a game was starting) or on test and integration infrastructure where you\'re not running your application all the time. The pricing for serverless instances is actually pretty straightforward: You pay for what you use. Unlike traditional MongoDB Atlas Clusters where you provision a set of servers on a tier that specifies the performance of the cluster, and pay for that tier unless your instance is scaled up or down, with Atlas serverless instances, you pay for the exact queries that you run, and the instance will automatically be scaled up or down as your usage scales up or down. Being able to efficiently query your data is important for scaling your website and keeping your costs low in *any* situation. It's just more visible when you are billed per query. Learning these skills will both save you money *and* take your MongoDB skills to the next level. ## Index your data to keep costs down I'm not going to go into any detail here on what an RPU is, or exactly how billing is calculated, because my colleague Vishal has already written MongoDB Serverless: Billing 101. I recommend checking that out *first*, just to see how Vishal demonstrates the significant impact having the right index can have on the cost of your queries! If you want more information on how to appropriately index your data, there are a bunch of good resources to check out. MongoDB University has a free course, M201: MongoDB Performance. It'll teach you the ins and outs of analyzing your queries and how they make use of indexes, and things to think about when indexing your data. The MongoDB Manual also contains excellent documentation on MongoDB Indexes. You'll want to read it and keep it bookmarked for future reference. It's also worth reading up on how to analyze your queries and try to reduce index scans and collection scans as much as possible. If you index your data correctly, you'll dramatically reduce your serverless costs by reducing the number of documents that need to be scanned to find the data you're accessing and updating. ## Modeling your data Once you've ensured that you know how to efficiently index your data, the next step is to make sure that your schema is designed to be as efficient as possible. For example, if you've migrated your schema directly from a relational database, you might have lots of collections containing shallow documents, and you may be using joins to re-combine this data when you're accessing the data. This isn't an efficient way to use MongoDB. For one thing, if you're doing this, you'll want to internalize our mantra, "data that is accessed together should be stored together." Make use of MongoDB's rich document model to ensure that data can be accessed in a single read operation where possible. In most situations where reads are higher than writes, duplicating data across multiple documents will be much more performant and thus cheaper than storing the data normalized in a separate collection and using the $lookup aggregation stage to query it. The MongoDB blog has a series of posts describing MongoDB Design Patterns, and many of them will help you to model your data in a more efficient manner. 
I recommend these posts in almost every blog post and talk that I do, so it's definitely worth your time getting to know them. Once again, the MongoDB Manual contains information about data modeling, and we also have a MongoDB University course, M320: Data Modeling. If you really want to store your data efficiently in MongoDB, you should check them out. ## Use the Atlas performance tools MongoDB Atlas also offers built-in tools that monitor your usage of queries and indexes in production. From time to time, it's a good idea to log into the MongoDB Atlas web interface, hit "Browse Collections," and then click the "Performance Advisor" tab to check if we've identified indexes you could create (or drop). ## Monitor your serverless usage It's worth keeping an eye on your serverless instance usage in case a new deployment dramatically spikes your usage of RPUs and WPUs. You can set these up in your Atlas Project Alerts screen. ## Conclusion If there's an overall message in this post, it's that efficient modeling and indexing of your data should be your primary focus if you're looking to use MongoDB Atlas serverless instances to keep your costs low. The great thing is that these are skills you probably already have! Or at least, if you need to learn it, then the skills are transferable to any MongoDB database you might work on in the future.
md
{ "tags": [ "Atlas", "Serverless" ], "pageDescription": "A guide to the things you need to think about when using the new MongoDB Atlas serverless instances to keep your usage costs down.", "contentType": "Article" }
Keeping Your Costs Down with MongoDB Atlas Serverless Instances
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/subscribing-changes-browser-websockets
created
md
{ "tags": [ "MongoDB", "Python" ], "pageDescription": "Subscribe to MongoDB Change Streams via WebSockets using Python and Tornado.", "contentType": "Tutorial" }
Subscribe to MongoDB Change Streams Via WebSockets
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-zero-to-mobile-dev
created
# From Zero to Mobile Developer in 40 Minutes

Are you an experienced non-mobile developer interested in writing your first iOS app? Or maybe a mobile developer wondering how to enhance your app or simplify your code using Realm and SwiftUI?

I've written a number of tutorials aimed at these cohorts, and I've published a video that attempts to cover everything in just 40 minutes.

I start with a brief tour of the anatomy of a mobile app—covering both the backend and frontend components. The bulk of the tutorial is devoted to a hands-on demonstration of building a simple chat app.

The app lets users open a chatroom of their choice. Once in a chat room, users can share messages with other members. While the app is simple, it solves some complex distributed data issues, with real-time syncing of data between your backend database and your mobile apps. Realm is also available for other platforms, such as Android, so the same back end and data can be shared with all versions of your app.

:youtube[Video tutorial showing how to build your first mobile iOS/iPhone app using Realm and SwiftUI]{vid=lSp95xkvo1U}

You can find all of the code from the tutorial in the repo.

If this tutorial has whetted your appetite and you'd like to see more (and maybe try it for yourself), then I'm running a more leisurely(?) two-hour workshop at MongoDB World 2022—From 0 to Mobile Developer in 2 Hours with Realm and SwiftUI, where I build an all-new app.

There's still time to register for MongoDB World 2022. **Use code AndrewMorgan25 for a 25% discount**.
md
{ "tags": [ "Realm", "Swift", "iOS" ], "pageDescription": "Video showing how to build your first iOS app using SwiftUI and Realm", "contentType": "Quickstart" }
From Zero to Mobile Developer in 40 Minutes
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/php/php113-release
created
# MongoDB PHP Extension 1.13.0 Released

The PHP team is happy to announce that version 1.13.0 of the mongodb PHP extension is now available on PECL. Thanks also to our intern, Tanil Su, who added the server discovery and monitoring functionality for 1.13.0!

## Release Highlights

`MongoDB\Driver\Manager::__construct()` supports two new URI options: `srvMaxHosts` and `srvServiceName`.

* `srvMaxHosts` may be used with sharded clusters to limit the number of hosts that will be added to a seed list following the initial SRV lookup.
* `srvServiceName` may be used with self-managed deployments to customize the default service name (i.e., “mongodb”).

This release introduces support for SDAM Monitoring, which applications can use to monitor internal driver behavior for server discovery and monitoring. Similar to the existing command monitoring API, applications can implement the `MongoDB\Driver\Monitoring\SDAMSubscriber` interface and register the subscriber globally or for a single Manager using `MongoDB\Driver\Monitoring\addSubscriber()` or `MongoDB\Driver\Manager::addSubscriber()`, respectively. In addition to many new event classes, this feature introduces the ServerDescription and TopologyDescription classes.

This release also upgrades our libbson and libmongoc dependencies to 1.21.1. The libmongocrypt dependency has been upgraded to 1.3.2.

Note that support for MongoDB 3.4 and earlier has been *removed*.

A complete list of resolved issues in this release may be found at:
https://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12484&version=32494

## Documentation

Documentation is available on PHP.net:
http://php.net/set.mongodb

## Installation

You can either download and install the source manually, or you can install the extension with:

`pecl install mongodb-1.13.0`

or update with:

`pecl upgrade mongodb-1.13.0`

Windows binaries are available on PECL:
http://pecl.php.net/package/mongodb
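As a quick, hypothetical illustration of the new URI options mentioned in the release highlights (the hostnames are placeholders):

```php
<?php
// Limit the driver to two hosts from the SRV lookup when connecting
// to a sharded cluster (placeholder hostname).
$manager = new MongoDB\Driver\Manager(
    'mongodb+srv://cluster0.example.mongodb.net/?srvMaxHosts=2'
);

// For a self-managed deployment, srvServiceName overrides the default
// "mongodb" SRV service name (placeholder hostname and service name).
$manager = new MongoDB\Driver\Manager(
    'mongodb+srv://db.example.internal/?srvServiceName=customname'
);
```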
md
{ "tags": [ "PHP" ], "pageDescription": "Announcing our latest release of the PHP Extension 1.13.0!", "contentType": "News & Announcements" }
MongoDB PHP Extension 1.13.0 Released
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/next-gen-webapps-remix-atlas-data-api
created
# Next Gen Web Apps with Remix and MongoDB Atlas Data API > Learn more about the GA version here. Javascript-based application stacks have proven themselves to be the dominating architecture for web applications we all use. From *MEAN* to *MERN* and *MEVN*, the idea is to have a JavaScript-based web client and server, communicating through a REST or GraphQL API powered by the document model of MongoDB as its flexible data store. Remix is a new JS framework that comes to disrupt the perception of static websites and tiering the view and the controller. This framework aims to simplify the web component nesting by turning our web components into small microservices that can load, manipulate, and present data based on the specific use case and application state. The idea of combining the view logic with the server business logic and data load, leaving the state management and binding to the framework, makes the development fast and agile. Now, adding a data access layer such as MongoDB Atlas and its new Data API makes building data-driven web applications super simple. No driver is needed and everything happens in a loader function via some https calls. To showcase how easy it is, we have built a demo movie search application based on MongoDB Atlas sample database sample_mflix. In this article, we will cover the main features of this application and learn how to use Atlas Data API and Atlas Search features. > Make sure to check out the live Remix and MongoDB demo application! You can find its source code in this dedicated GitHub repository. ## Setting Up an Atlas Cluster and Data API First we need to prepare our data tier that we will work with our Remix application. Follow these steps: * Get started with Atlas and prepare a cluster to work with. * Enable the Data API and save the API key. * Load a sample data set into the cluster. (This application is using sample\_mflix for its demo.) ## Setting Up Remix Application As other Node frameworks, the easiest way to bootstrap an app is by deploying a template application as a base: ``` shell npx create-remix@latest ``` The command will prompt for several settings. You can use the default ones with the default self hosting option. Let’s also add a few node packages that we’ll be using in our application. Navigate to your newly created project and execute the following command: ``` shell npm install axios dotenv tiny-invariant ``` The application consists of two main files which host the entry point to the demo application with main page html components: `app/root.jsx` and `app/routes/index.jsx`. In the real world, it will probably be the routing to a login or main page. ``` - app - routes - index.jsx - root.jsx ``` In `app/root.jsx`, we have the main building blocks of creating our main page and menu to route us to the different demos. ``` html * Home * Movies Search Demo * Facet Search Demo * GitHub ``` > If you choose to use TypeScript while creating the application, add the navigation menu to `app/routes/index.tsx` instead. Don't forget to import `Link` from `remix`. Main areas are exported in the `app/routes/index.jsx` under the “routes” directory which we will introduce in the following section. This file uses the same logic of a UI representation returned as JSX while loading of data is happening in the loader function. In this case, the loader only provides some static data from the “data” variable. Now, here is where Remix introduces the clever routing in the form of routes directories named after our URL path conventions. 
For the main demo called “movies,” we created a “movies” route:

```
- routes
  - movies
    - $title.jsx
    - index.jsx
```

The idea is that whenever our application redirects to `/movies`, the index.jsx under `routes/movies` is called. Each jsx file produces a React component and loads its data via a loader function (operating as the server backend data provider).

Before we can create our main movies page and fetch the movies from the Atlas Data API, let's create a `.env` file in the main directory to provide the needed Atlas information for our application:

```
DATA_API_KEY=
DATA_API_BASE_URL=
CLUSTER_NAME=
```

Place the relevant information from your Atlas project, locating the API key, the Data API base URL, and the cluster name. Those will shortly be used in our Data API calls.

> ⚠️**Important**: A `.env` file is good for development purposes. However, for production environments, consider an appropriate secret repository to store this information for your deployment.

Let's load this .env file when the application starts by adjusting the “dev” npm scripts in the `package.json` file:

``` json
"dev": "node -r dotenv/config node_modules/.bin/remix dev"
```

## `movies/index.jsx` File

Let's start to create our movies list by rendering it from our data loader and the `sample_mflix.movies` collection structure.

Navigate to the ‘app/routes’ directory and execute the following commands to create new routes for our movies list and movie details pages.

```shell
cd app/routes
mkdir movies
touch movies/index.jsx movies/\$title.jsx
```

Then, open the `movies/index.jsx` file in your favorite code editor and add the following:

``` javascript
import { Form, Link, useLoaderData, useSearchParams, useSubmit } from "remix";
const axios = require("axios");

export default function Movies() {
  let [searchParams, setSearchParams] = useSearchParams();
  let submit = useSubmit();
  let movies = useLoaderData();
  let totalFound = movies.totalCount;
  let totalShow = movies.showCount;

  return (
    <div>
      <h1>MOVIES</h1>
      <Form method="get">
        <input
          onChange={(e) => submit(e.currentTarget.form)}
          id="searchBar"
          name="searchTerm"
          placeholder="Search movies..."
        />
      </Form>
      <p>Showing {totalShow} of total {totalFound} movies found</p>
      <ul>
        {movies.documents.map(movie => (
          <li key={movie.title}>
            <Link to={movie.title}>{movie.title}</Link>
          </li>
        ))}
      </ul>
    </div>
  );
}
```

As you can see in the return clause, we have a title named “Movies” and an input inside a “get” form to post a search input if requested. We will shortly explain how forms are convenient when working with Remix. Additionally, there is a link list of the retrieved movies documents.

Using the `<Link>` component from Remix allows us to create links to each individual movie name. This will allow us to pass the title as a path parameter and trigger the `$title.jsx` component, which we will build shortly.

The data is retrieved using `useLoaderData()`, which is a helper function provided by the framework to retrieve data from the server-side “loader” function.

### The Loader Function

The interesting part is the `loader()` function. Let's create one to first retrieve the first 100 movie documents and leave the search for later. Add the following code to the `movies/index.jsx` file.
```javascript
export let loader = async ({ request }) => {
  let pipeline = [{ $limit: 100 }];

  let data = JSON.stringify({
    collection: "movies",
    database: "sample_mflix",
    dataSource: process.env.CLUSTER_NAME,
    pipeline
  });

  let config = {
    method: 'post',
    url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Request-Headers': '*',
      'apiKey': process.env.DATA_API_KEY
    },
    data
  };

  let movies = await axios(config);
  let totalFound = await getCountMovies();

  return {
    showCount: movies?.data?.documents?.length,
    totalCount: totalFound,
    documents: movies?.data?.documents
  };
};

const getCountMovies = async (countFilter) => {
  let pipeline = countFilter
    ? [{ $match: countFilter }, { $count: 'count' }]
    : [{ $count: 'count' }];

  let data = JSON.stringify({
    collection: "movies",
    database: "sample_mflix",
    dataSource: process.env.CLUSTER_NAME,
    pipeline
  });

  let config = {
    method: 'post',
    url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Request-Headers': '*',
      'apiKey': process.env.DATA_API_KEY
    },
    data
  };

  let result = await axios(config);

  return result?.data?.documents[0]?.count;
}
```

Here we start with an aggregation pipeline to just limit the first 100 documents for our initial view: `pipeline = [{ $limit: 100 }];`. This pipeline will be passed to our REST API call to the Data API endpoint:

``` javascript
let data = JSON.stringify({
  collection: "movies",
  database: "sample_mflix",
  dataSource: process.env.CLUSTER_NAME,
  pipeline
});

let config = {
  method: 'post',
  url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,
  headers: {
    'Content-Type': 'application/json',
    'Access-Control-Request-Headers': '*',
    'apiKey': process.env.DATA_API_KEY
  },
  data
};

let result = await axios(config);
```

We place the API key and the URL from the secrets file we created earlier as environment variables. The results are then returned to the UI function (for example, the total count from `getCountMovies()`):

``` javascript
return result?.data?.documents[0]?.count;
```

To run the application, we can go into the main folder and execute the following command:

``` shell
npm run dev
```

The application should start on the `http://localhost:3000` URL.

### Adding a Search Via Atlas Text Search

For the full text search capabilities of this demo, you need to create a dynamic Atlas Search index on database `sample_mflix`, collection `movies` (use default dynamic mappings). This requires version 4.4.11+ (free tier included) or 5.0.4+ of the Atlas cluster for the search metadata and facet searches we will discuss later.

Since we have a `<Form>` Remix component submitting the form input, data typed into the input box will trigger a data reload. The `<Form>` submission reloads the loader function without refreshing the entire page. This will naturally resubmit the URL as `/movies?searchTerm=`, which is why it's easy to use the same loader function, extract the URL parameter, and add search logic by just amending the base pipeline:

``` javascript
let url = new URL(request.url);
let searchTerm = url.searchParams.get("searchTerm");
const pipeline = searchTerm ? [
  { $search: { index: 'default', text: { query: searchTerm, path: { 'wildcard': '*' } } } },
  { $limit: 100 },
  { "$addFields": { meta: "$$SEARCH_META" } }
] : [{ $limit: 100 }];
```

In this case, the submission of a form will call the loader function again. If there was a `searchTerm` submitted in the URL, it will be extracted under the `searchTerm` variable and create a `$search` pipeline to interact with the Atlas Search text index.
``` javascript text: { query: searchTerm, path: { 'wildcard': '*' } } ``` Additionally, there is a very neat feature that allow us to get the metadata for our search—for example, how many matches were for this specific keyword (as we don’t want to show more than 100 results). ``` javascript { "$addFields" : {meta : "$$SEARCH_META"}} ``` When wiring everything together, we get a working searching functionality, including metadata information on our searches. Now, if you noticed, each movie title is actually a link redirecting to `./movies/` url. But why is this good, you ask? Remix allows us to build parameterized routes based on our URL path parameters. ## `movies/$title.jsx` File The `movies/$title.jsx` file will show each movie's details when loaded. The magic is that the loader function will get the name of the movie from the URL. So, in case we clicked on “Home Alone,” the path will be `http:/localhost:3000/movies/Home+Alone`. This will allow us to fetch the specific information for that title. Open the `movies/$title.jsx` file we created earlier, and add the following: ```javascript import { Link, useLoaderData } from "remix"; import invariant from "tiny-invariant"; const axios = require('axios'); export let loader = async ({ params }) => { invariant(params.title, "expected params.title"); let data = JSON.stringify({ collection: "movies", database: "sample_mflix", dataSource: process.env.CLUSTER_NAME, filter: { title: params.title } }); let config = { method: 'post', url: process.env.DATA_API_BASE_URL + '/action/findOne', headers: { 'Content-Type': 'application/json', 'Access-Control-Request-Headers': '*', 'apiKey': process.env.DATA_API_KEY }, data }; let result = await axios(config); let movie = result?.data?.document || {}; return { title: params.title, plot: movie.fullplot, genres: movie.genres, directors: movie.directors, year: movie.year, image: movie.poster }; }; ``` The `findOne` query will filter the results by title. The title is extracted from the URL params provided as an argument to the loader function. The data is returned as a document with the needed information to be presented like “full plot,” “poster,” “genres,” etc. Let’s show the data with a simple html layout: ``` javascript export default function MovieDetails() { let movie = useLoaderData(); return ( <div> <h1>{movie.title}</h1> {movie.plot} <br></br> <div styles="padding: 25% 0;" class="tooltip"> <li> Year </li> <Link class="tooltiptext" to={"../movies?filter=" + JSON.stringify({ "year": movie.year })}>{movie.year}</Link> </div> <br /> <div styles="padding: 25% 0;" class="tooltip"> <li> Genres </li> <Link class="tooltiptext" to={"../movies?filter=" + JSON.stringify({ "genres": movie.genres })}>{movie.genres.map(genre => { return genre + " | " })}</Link> </div> <br /> <div styles="padding: 25% 0;" class="tooltip"> <li> Directors </li> <Link class="tooltiptext" to={"../movies?filter=" + JSON.stringify({ "directors": movie.directors })}>{movie.directors.map(director => { return director + " | " })}</Link> </div> <br></br> <img src={movie.image}></img> </div> ); } ``` ## `facets/index.jsx` File MongoDB Atlas Search introduced a new feature complementing a very common use case in the text search world: categorising and allowing a [faceted search. Facet search is a technique to present users with possible search criteria and allow them to specify multiple search dimensions. 
In a simpler example, it's the search criteria panels you see in many commercial or booking websites to help you narrow your search based on different available categories.

In addition to the different criteria you can have in a facet search, it adds better and much faster counting of the different categories.

To showcase this ability, we have created a new route called `facets` and added an additional page to show counts per genre under `routes/facets/index.jsx`. Let's look at its loader function:

``` javascript
export let loader = async ({ request }) => {
  let pipeline = [
    {
      $searchMeta: {
        facet: {
          operator: {
            range: {
              path: "year",
              gte: 1900
            }
          },
          facets: {
            genresFacet: {
              type: "string",
              path: "genres"
            }
          }
        }
      }
    }
  ];

  let data = JSON.stringify({
    collection: "movies",
    database: "sample_mflix",
    dataSource: process.env.CLUSTER_NAME,
    pipeline
  });

  let config = {
    method: "post",
    url: process.env.DATA_API_BASE_URL + "/action/aggregate",
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Request-Headers": "*",
      "apiKey": process.env.DATA_API_KEY
    },
    data
  };

  let movies = await axios(config);

  return movies?.data?.documents[0];
};
```

It uses a new stage called $searchMeta and a facet definition with two parts: an operator to make sure that movies start from a given year (1900), and a facet that aggregates counts based on the `genres` field:

``` javascript
facet: {
  operator: {
    range: {
      path: "year",
      gte: 1900
    }
  },
  facets: {
    genresFacet: {
      type: "string",
      path: "genres"
    }
  }
}
```

To use the facet search, we need to amend the index and add both fields as facet types. Editing the index is easy through the Atlas visual editor. Just click `[...]` > “Edit with visual editor.”

*(Screenshot: facet mappings in the visual index editor.)*

An output document of the search query will look like this:

``` json
{"count":{"lowerBound":23494},
 "facet":{"genresFacet":{"buckets":[
   {"_id":"Drama","count":13771},
   {"_id":"Comedy","count":7017},
   {"_id":"Romance","count":3663},
   {"_id":"Crime","count":2676},
   {"_id":"Thriller","count":2655},
   {"_id":"Action","count":2532},
   {"_id":"Documentary","count":2117},
   {"_id":"Adventure","count":2038},
   {"_id":"Horror","count":1703},
   {"_id":"Biography","count":1401}]
}}}
```

Once we route the UI page under the facets demo, the table of genres in the UI will look like this:

*(Screenshot: the facet search UI showing genre counts.)*

### Adding Clickable Filters Using Routes

To make the application even more interactive, we have decided to allow clicking on any of the genres on the facet page and redirect to the movies search page with `movies?filter={genres : <CLICKED-VALUE>}`:

```html
<div class="tooltip">
  <Link to={
    "../movies?filter=" +
    JSON.stringify({ genres: bucket._id })
  } >
    {bucket._id}
  </Link>
  <span class="tooltiptext">
    Press to filter by "{bucket._id}" genre
  </span>
</div>
```

Now, every genre clicked on the facet UI will be redirected back to `/movies?filter={genres: <VALUE-BUCKET._id>}`—for example, `/movies?filter={genres : "Drama"}`. This will trigger the `movies/index.jsx` loader function, where we will add the following condition:

```javascript
let filter = JSON.parse(url.searchParams.get("filter"));
...
else if (filter) {
  pipeline = [
    { "$match": filter },
    { $limit: 100 }
  ]
}
```

Look how easy it is with the aggregation pipelines to switch between a regular match and a full text search. With the same approach, we can add any of the presented fields as search criteria—for example, clicking directors on a specific movie details page passing `/movies?filter={directors: [ <values> ]}`.

*(Screenshots: clicking a filtered field, e.g., "Directors," on the movie details page redirects to the filtered movies list.)*

## Wrap Up

Remix has some clever and refreshing concepts for building React-based web applications. Having server and client code coupled together inside modular, URL-parameterized JS files makes developing fun and productive.

The MongoDB Atlas Data API comes as a great fit to easily access, search, and dice your data with simple REST-like API syntax. Overall, the presented stack reduces the amount of code and files to maintain while delivering best-in-class UI capabilities.

Check out the full code at the following GitHub repo and get started with your new application using MongoDB Atlas today!

> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
md
{ "tags": [ "JavaScript", "Atlas" ], "pageDescription": "Remix is a new and exciting javascript web framework. Together with the MongoDB Atlas Data API and Atlas Search it can form powerful web applications. A guided tour will show you how to leverage both technologies together. ", "contentType": "Tutorial" }
Next Gen Web Apps with Remix and MongoDB Atlas Data API
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/rust/rust-mongodb-frameworks
created
# Using Rust Web Development Frameworks with MongoDB ## Introduction So, you've decided to write a Rust application with MongoDB, and you're wondering which of the top web development frameworks to use. Below, we give some suggestions and resources for how to: 1. Use MongoDB with Actix and Rust. 2. Use MongoDB with Rocket.rs and Rust. The TLDR is that any of the popular Rust frameworks can be used with MongoDB, and we have code examples, tutorials, and other resources to guide you. ### Building MongoDB Rust apps with Actix Actix is a powerful and performant web framework for building Rust applications, with a long list of supported features. You can find a working example of using MongoDB with Actix in the `databases` directory under Actix's github, but otherwise, if you're looking to build a REST API with Rust and MongoDB, using Actix along the way, this tutorial is one of the better ones we've seen. ### Building MongoDB Rust apps with Rocket.rs Prefer Rocket? Rocket is a fast, secure, and type safe framework that is low on boilerplate. It's easy to use MongoDB with Rocket to build Rust applications. There's a tutorial on Medium we particularly like on building a REST API with Rust, MongoDB, and Rocket. If all you want is to see a code example on github, we recommend this one.
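Whichever framework you pick, the MongoDB side of the application tends to look the same: create a single `mongodb::Client` at startup and hand it to your handlers as shared state (Rocket's managed state or Actix's `web::Data`). The snippet below is only a minimal, hypothetical sketch using the official Rust driver (2.x-style API); the connection string, database, and collection names are placeholders:

```rust
use mongodb::{bson::doc, options::ClientOptions, Client};

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    // Parse the connection string (placeholder URI) and build one shared client.
    let options =
        ClientOptions::parse("mongodb+srv://user:pass@cluster0.example.mongodb.net").await?;
    let client = Client::with_options(options)?;

    // Grab a handle to a collection and run a simple query.
    let posts = client
        .database("blog")
        .collection::<mongodb::bson::Document>("posts");
    let post = posts.find_one(doc! { "slug": "hello-world" }, None).await?;
    println!("{:?}", post);

    // In Actix you would wrap `client` in `web::Data`, and in Rocket pass it to
    // `manage()`, so every handler reuses the same connection pool.
    Ok(())
}
```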
md
{ "tags": [ "Rust", "MongoDB" ], "pageDescription": "Which Rust frameworks work best with MongoDB?", "contentType": "Article" }
Using Rust Web Development Frameworks with MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/connectors/leverage-mongodb-data-kafka-tutorials
created
# Learn How to Leverage MongoDB Data within Kafka with New Tutorials! The MongoDB Connector for Apache Kafka documentation now includes new tutorials! These tutorials introduce you to key concepts behind the connector and by the end, you’ll have an understanding of how to move data between MongoDB and Apache Kafka. The tutorials are as follows: * Explore Change Streams Change streams is a MongoDB server feature that provides change data capture (CDC) capabilities for MongoDB collections. The source connector relies on change streams to move data from MongoDB to a Kafka topic. In this tutorial, you will explore creating a change stream and reading change stream events all through a Python application. * Getting Started with the MongoDB Kafka Source Connector In this tutorial, you will configure a source connector to read data from a MongoDB collection into an Apache Kafka topic and examine the content of the event messages. * Getting Started with the MongoDB Kafka Sink Connector In this tutorial, you will configure a sink connector to copy data from a Kafka topic into a MongoDB cluster and then write a Python application to write data into the topic. * Replicate Data with a Change Data Capture Handler Configure both a MongoDB source and sink connector to replicate data between two collections using the MongoDB CDC handler. * Migrate an Existing Collection to a Time Series Collection Time series collections efficiently store sequences of measurements over a period of time, dramatically increasing the performance of time-based data. In this tutorial, you will configure both a source and sink connector to replicate the data from a collection into a time series collection. These tutorials run locally within a Docker Compose environment that includes Apache Kafka, Kafka Connect, and MongoDB. Before starting them, follow and complete the Tutorial Setup. You will work through the steps using a tutorial shell and containers available on Docker Hub. The tutorial shell includes tools such as the new Mongo shell, KafkaCat, and helper scripts that make it easy to configure Kafka Connect from the command line. If you have any questions or feedback on the tutorials, please post them on the MongoDB Community Forums.
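To give a feel for what the source connector tutorial walks you through, a minimal configuration submitted to Kafka Connect looks roughly like the following; the connection string, database, collection, and topic prefix here are placeholders, and the tutorials use their own values:

```json
{
  "name": "mongo-simple-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://mongo1:27017/?replicaSet=rs0",
    "database": "Tutorial1",
    "collection": "orders",
    "topic.prefix": "mongo"
  }
}
```

The sink connector is configured the same way, swapping in `com.mongodb.kafka.connect.MongoSinkConnector` and a `topics` list to read from.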
md
{ "tags": [ "Connectors", "Kafka" ], "pageDescription": "MongoDB Documentation has released a series of new tutorials based upon a self-hosted Docker compose environment that includes all the components needed to learn.", "contentType": "Article" }
Learn How to Leverage MongoDB Data within Kafka with New Tutorials!
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/code-example-nextjs-mongodb
created
# Blogue

## Creator

Sujan Chhetri contributed this project.

## About the Project:

Blogue is a writing platform where all writers and non-writers are welcome. We believe in sharing knowledge in the form of words. The page has a 'newshub' for articles and a 'projecthub' where you can share projects, among other things. Posts can be categorized by health, technology, business, science, and more. It's also possible to select the source (CNN, Wired, etc.).

## Inspiration

I created this so that I could share content (blogs and articles) on my own platform. I am a self-taught programmer. I want other students to know that you can make things happen if you make plans and start learning.

## How it Works

Its backend is written with Node.js and its frontend with Next.js and React. MongoDB Atlas is used as the storage. MongoDB is smooth and fast.

The GitHub repo shared above consists of the backend for a blogging platform. Most of the features that belong in a blog are available. Some are listed here:

* User Signup / Signin
* JWT-based Authentication System
* Role-Based Authorization System (user/admin)
* Blogs Search
* Related Blogs
* Categories
* Tags
* User Profile
* Blog Author Private Contact Form
* Multiple User Authorization System
* Social Login with Google
* Admin / User Dashboard privileges
* Image Uploads
* Load More Blogs
md
{ "tags": [ "JavaScript", "Atlas", "Node.js", "Next.js" ], "pageDescription": "A reading and writing platform. ", "contentType": "Code Example" }
Blogue
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/rust/rust-mongodb-blog-project
created
# Beginner Coding Project: Build a Blog Engine with Rust and MongoDB

## Description of Application

A quick and easy example application project that creates a demo blog post engine. It has a very simple UI and is ideal for beginners to the Rust programming language.

## Technology Stack

This code example utilizes the following technologies:

* MongoDB Atlas
* Rust
* Rocket.rs framework
md
{ "tags": [ "Rust" ], "pageDescription": "A beginner level project using MongoDB with Rust", "contentType": "Code Example" }
Beginner Coding Project: Build a Blog Engine with Rust and MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/build-serverless-applications-sst-mongodb-atlas
created
# How to Build Serverless Applications with SST and MongoDB Atlas Serverless computing is now becoming the standard for how applications are developed in the cloud. Serverless at its core lets developers create small packages of code that are executed by a cloud provider to respond to events. These events can range from HTTP requests to cron jobs, or even file upload notifications. These packages of code are called functions or Lambda functions, named after AWS Lambda, the AWS service that powers them. This model allows serverless services to scale with ease and be incredibly cost effective, as you only pay for the exact number of milliseconds it takes to execute them. However, working with Lambda functions locally can be tricky. You’ll need to either emulate the events locally or redeploy them to the cloud every time you make a change. On the other hand, due to the event-based execution of these functions, you’ll need to use services that support a similar model as well. For instance, a traditional database expects you to hold on to a connection and reuse it to make queries. This doesn’t work well in the serverless model, since Lambda functions are effectively stateless. Every invocation of a Lambda function creates a new environment. The state from all previous invocations is lost unless committed to persistent storage. Over the years, there has been a steady stream of improvements from the community to address these challenges. The developer experience is now at a point where it’s incredibly easy to build full-stack applications with serverless. In this post, we’ll look at the new way for building serverless apps. This includes using: * Serverless Stack \(SST\), a framework for building serverless apps with a great local developer experience. * A serverless database in MongoDB Atlas, the most advanced cloud database service on the market. > “With MongoDB Atlas and SST, it’s now easier than ever to build full-stack serverless applications.” — Andrew Davidson, Vice President, Product Management, MongoDB Let’s look at how using these tools together can make it easy to build serverless applications. ## Developing serverless apps locally Lambda functions are packages of code that are executed in response to cloud events. This makes it a little tricky to work with them locally, since you want them to respond to events that happen in the cloud. You can work around this by emulating these events locally or by redeploying your functions to test them. Both these approaches don’t work well in practice. ### Live Lambda development with SST SST is a framework for building serverless applications that allows developers to work on their Lambda functions locally. It features something called Live Lambda Development that proxies the requests from the cloud to your local machine, executes them locally, and sends the results back. This allows you to work on your functions locally without having to redeploy them or emulate the events. Live Lambda development thus allows you to work on your serverless applications, just as you would with a traditional server-based application. ## Serverless databases Traditional databases operate under the assumption that there is a consistent connection between the application and the database. It also assumes that you’ll be responsible for scaling the database capacity, just as you would with your server-based application. However, in a serverless context, you are simply writing the code and the cloud provider is responsible for scaling and operating the application. 
You expect your database to behave similarly as well. You also expect it to handle new connections automatically. ### On-demand serverless databases with MongoDB Atlas To address this issue, MongoDB Atlas launched serverless instances, currently available in preview. This allows developers to use MongoDB’s world class developer experience without any setup, maintenance, or tuning. You simply pick the region and you’ll receive an on-demand database endpoint for your application. You can then make queries to your database, just as you normally would, and everything else is taken care of by Atlas. Serverless instances automatically scale up or down depending on your usage, so you never have to worry about provisioning more than you need and you only pay for what you use. ## Get started In this post, we saw a quick overview of the current serverless landscape, and how serverless computing abstracts and automates away many of the lower level infrastructure decisions. So, you can focus on building features that matter to your business! For help building your first serverless application with SST and MongoDB Atlas, check out our tutorial: How to use MongoDB Atlas in your serverless app. ✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace. >"MongoDB Atlas’ serverless instances and SST allow developers to leverage MongoDB’s unparalleled developer experience to build full-stack serverless apps." — Jay V, CEO, SST Also make sure to check out the quick start and join the community forums for more insights on building serverless apps with MongoDB Atlas.
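As a closing illustration of the connection-reuse point above, here is a minimal sketch of a Node.js Lambda handler that creates a single MongoDB client per container and reuses it across warm invocations. The `demo` database, `users` collection, and the `MONGODB_URI` environment variable are assumptions for the example, not part of the SST tutorial:

```javascript
// A minimal sketch of reusing a MongoDB connection across Lambda invocations.
// Database/collection names and MONGODB_URI are illustrative assumptions.
import { MongoClient } from "mongodb";

// Created once when the container starts, then reused while it stays warm.
const clientPromise = new MongoClient(process.env.MONGODB_URI).connect();

export async function handler() {
  const client = await clientPromise;
  const count = await client
    .db("demo")
    .collection("users")
    .estimatedDocumentCount();

  return {
    statusCode: 200,
    body: JSON.stringify({ count }),
  };
}
```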
md
{ "tags": [ "Atlas", "Serverless", "AWS" ], "pageDescription": "The developer experience is at a point where it’s easy to build full-stack applications with serverless. In this post, we’ll look at the new way of building serverless apps.", "contentType": "Article" }
How to Build Serverless Applications with SST and MongoDB Atlas
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/python/groupus-app
created
# GroupUs

## Creator

Anjay Goel contributed this project.

## About the Project

A web app that automates group formation for projects, presentations, assignments, etc. It saves you the inconvenience of asking tens of people simply to form a group. Also, letting an algorithm do the matching ensures that the groups formed are more optimal and fair.

## Inspiration

Inspired by the difficulty and unnecessary hassle of forming several different groups for different classes, especially during virtual classes.

## Why MongoDB?

I used MongoDB because the project required a database able to store and query JSON-like documents.

## How It Works

The user creates a new request and adds participant names, email IDs, group size, deadline, etc. The app then sends a form to all participants asking them to fill out their preferences. Once all participants have filled in their choices (or the deadline is reached), it forms groups using a matching algorithm and sends emails informing everyone of their respective groups.
md
{ "tags": [ "Python", "MongoDB", "JavaScript" ], "pageDescription": "A web-app that automates group formation for projects/assignments etc..", "contentType": "Code Example" }
GroupUs
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/practical-mongodb-aggregations-book
created
# Introducing a New MongoDB Aggregations Book

I'm pleased to announce the publication of my new book, **"Practical MongoDB Aggregations."** The book is available electronically for free for anyone to use at: .

This book is intended for developers, architects, data analysts, data engineers, and data scientists. It aims to improve your productivity and effectiveness when building aggregation pipelines and to help you understand how to optimise those pipelines.

The book is split into two key parts:

1. A set of tips and principles to help you get the most out of aggregations.
2. A set of example aggregation pipelines for solving common data manipulation challenges, which you can easily copy and try for yourself.

> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
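To give a flavor of the kind of pipeline the book's examples explore, here is a small, generic aggregation run from the shell. The collection and field names are hypothetical and are not taken from the book:

```javascript
// Hypothetical "orders" collection: total up revenue per customer.
db.orders.aggregate([
  // Only consider completed orders
  { $match: { status: "complete" } },
  // Sum the order amounts for each customer
  { $group: { _id: "$customer_id", totalSpend: { $sum: "$amount" } } },
  // Show the biggest spenders first
  { $sort: { totalSpend: -1 } },
  { $limit: 10 },
]);
```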
md
{ "tags": [ "MongoDB" ], "pageDescription": "Learn more about our newest book, Practical MongoDB Aggregations, by Paul Done.", "contentType": "News & Announcements" }
Introducing a New MongoDB Aggregations Book
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/developing-web-application-netlify-serverless-functions-mongodb
created
MONGODB WITH NETLIFY FUNCTIONS
md
{ "tags": [ "JavaScript", "Atlas", "Node.js" ], "pageDescription": "Learn how to build and deploy a web application that leverages MongoDB and Netlify Functions for a serverless experience.", "contentType": "Tutorial" }
Developing a Web Application with Netlify Serverless Functions and MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/go/golang-mongodb-code-example
created
# Cinema: Example Go Microservices Application

Looking for a code example that has microservices in Go with Docker, Kubernetes, and MongoDB? Look no further! Cinema is an example project which demonstrates the use of microservices for a fictional movie theater. The Cinema backend is powered by four microservices, all of which happen to be written in Go, using MongoDB to manage the database and Docker to isolate and deploy the ecosystem.

* Movie Service: Provides information like movie ratings, title, etc.
* Show Times Service: Provides show times information.
* Booking Service: Provides booking information.
* Users Service: Provides movie suggestions for users by communicating with other services.

This project is available to clone or fork on GitHub from Manuel Morejón.
md
{ "tags": [ "Go", "MongoDB", "Kubernetes", "Docker" ], "pageDescription": " An easy project using Go, Docker, Kubernetes and MongoDB", "contentType": "Code Example" }
Cinema: Example Go Microservices Application
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/java/java-survey-2022
created
# The 2022 MongoDB Java Developer Survey According to the 2022 Stack Overflow Developer Survey, Java is the sixth most popular programming, scripting, or markup language. 17K of the whopping 53K+ respondents indicated that they use Java - that’s a huge footprint! Today we have more than 132,000 clusters running on Atlas using Java. We’re running our first-ever developer survey specifically for Java developers. We’ll use your survey responses to make changes that matter to you. The survey will take approximately 5-10 minutes to complete. As a way of saying thank you, we’ll be raffling off gift cards (five randomly chosen winners will receive $150). You can access the survey here.
md
{ "tags": [ "Java", "Spring" ], "pageDescription": "MongoDB is conducted a survey for Java developers", "contentType": "News & Announcements" }
The 2022 MongoDB Java Developer Survey
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/flexible-querying-with-atlas-search
created
# Flexible Querying with Atlas Search ## Introduction In this walkthrough, I will show how the flexibility of Atlas Search's inverted indexes are a powerful option versus traditional b-tree indexes when it comes to supporting ad-hoc queries. ## What is flexible querying? Flexible query engines provide the ability to execute a performant query that spans multiple indexes in your data store. This means you can write ad-hoc, dynamically generated queries, where you don't need to know the query, fields, or ordering of fields in advance. Be sure to check out the MongoDB documentation on this subject! It's very rare that MongoDB’s query planner selects a plan that involves multiple indexes. In this tutorial, we’ll walk through a scenario in which this becomes a requirement. ### Your application is in a constant state of evolution Let’s say you have a movie application with documents like: ``` { "title": "Fight Club", "year": 1999, "imdb": { "rating": 8.9, "votes": 1191784, "id": 137523 }, "cast": "Edward Norton", "Brad Pitt" ] } ``` ### Initial product requirements Now for the version 1.0 application, you need to query on title and year, so you first create a compound index via: `db.movies.createIndex( { "title": 1, "year": 1 } )` Then issue the query: `db.movies.find({"title":"Fight Club", "year":1999})` When you run an explain plan, you have a perfect query with a 1:1 documents-examined to documents-returned ratio: ``` { "executionStats": { "executionSuccess": true, "nReturned": 1, "executionTimeMillis": 0, "totalKeysExamined": 1, "totalDocsExamined": 1 } } ``` ### Our query then needs to evolve Now our application requirements have evolved and you need to query on cast and imdb. First you create the index: `db.movies.createIndex( { "cast": 1, "imdb.rating": 1 } )` Then issue the query: `db.movies.find({"cast":"Edward Norton", "imdb.rating":{ $gte:9 } })` Not the greatest documents-examined to documents-returned ratio, but still not terrible: ``` { "executionStats": { "executionSuccess": true, "nReturned": 7, "executionTimeMillis": 0, "totalKeysExamined": 17, "totalDocsExamined": 17 } } ``` ### Now our query evolves again Now, our application requires you issue a new query, which becomes a subset of the original: `db.movies.find({"imdb.rating" : { $gte:9 } })` The query above results in the dreaded **collection scan** despite the previous compound index (cast_imdb.rating) comprising the above query’s key. This is because the "imdb.rating" field is not the index-prefix, and the query contains no filter conditions on the "cast" field." *Note: Collection scans should be avoided because not only do they instruct the cursor to look at every document in the collection which is slow, but it also forces documents out of memory resulting in increased I/O pressure.* Our query plan results as follows: ``` { "executionStats": { "executionSuccess": true, "nReturned": 31, "executionTimeMillis": 26, "totalKeysExamined": 0, "totalDocsExamined": 23532 } } ``` Now you certainly could create a new index composed of just imdb.rating, which would return an index scan for the above query, but that’s three different indexes that the query planner would have to navigate in order to select the most performant response. ## Alternatively: Atlas Search Because Lucene uses a different index data structure ([inverted indexes vs B-tree indexes), it’s purpose-built to run queries that overlap into multiple indexes. Unlike compound indexes, the order of fields in the Atlas Search index definition is not important. 
Fields can be defined in any order. Therefore, it's not subject to the limitation above where a query that is only on a non-prefix field of a compound index cannot use the index. If you create a single index that maps all four of our fields above (title, year, cast, imdb):

```
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": { "type": "string", "dynamic": false },
      "year": { "type": "number", "dynamic": false },
      "cast": { "type": "string", "dynamic": false },
      "imdb.rating": { "type": "number", "dynamic": false }
    }
  }
}
```

Then you issue a query that first spans title and year via a must (AND) clause, which is the equivalent of `db.collection.find({"title":"Fight Club", "year":1999})`:

```
[{
  "$search": {
    "compound": {
      "must": [
        { "text": { "query": "Fight Club", "path": "title" } },
        { "range": { "path": "year", "gte": 1999, "lte": 1999 } }
      ]
    }
  }
}]
```

The corresponding query planner results:

```
{
  '$_internalSearchIdLookup': {},
  'executionTimeMillisEstimate': 6,
  'nReturned': 0
}
```

Then when you add `imdb` and `cast` to the query, you can still get performant results:

```
[{
  "$search": {
    "compound": {
      "must": [
        { "text": { "query": "Fight", "path": "title" } },
        { "range": { "path": "year", "gte": 1999, "lte": 1999 } },
        { "text": { "query": "Edward Norton", "path": "cast" } },
        { "range": { "path": "imdb.rating", "gte": 9 } }
      ]
    }
  }
}]
```

The corresponding query planner results:

```
{
  '$_internalSearchIdLookup': {},
  'executionTimeMillisEstimate': 6,
  'nReturned': 0
}
```

## This isn’t a peculiar scenario

Applications evolve as our users’ expectations and requirements do. Standard B-tree indexes simply cannot evolve to support your applications' changing requirements at the rate that an inverted index can.

### Use cases

Here are several examples where Atlas Search's inverted index data structures can come in handy, with links to reference material:

- GraphQL: If your database's entry point is GraphQL, where the queries are defined by the client, then you're a perfect candidate for inverted indexes.
- Advanced Search: You need to expand the filtering criteria for your search bar beyond several fields.
- Wildcard Search: Searching across fields that match combinations of characters and wildcards.
- Ad-Hoc Querying: The need to dynamically generate queries on demand for your clients.

# Resources

- Full code walkthrough via a Jupyter Notebook
md
{ "tags": [ "Atlas", "JavaScript", "GraphQL" ], "pageDescription": "Atlas Search provides the ability to execute a performant query that spans multiple indexes in your data store. It's very rare, however, that MongoDB’s query planner selects a plan that involves multiple indexes. We’ll walk through a scenario in which this becomes a requirement. ", "contentType": "Tutorial" }
Flexible Querying with Atlas Search
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/real-time-data-javascript
created
md
{ "tags": [ "MongoDB", "JavaScript", "React" ], "pageDescription": "In many applications nowadays, you want data to be displayed in real-time. Whether an IoT sensor reporting a value, a stock value that you want to track, or a chat application, you will want the data to automatically update your UI. This is possible using MongoDB Change Streams with the Realm Web SDK.", "contentType": "Tutorial" }
Real Time Data in a React JavaScript Front-End with Change Streams
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/code-example-js-mongodb-magazinemanagement
created
# Magazine Management

## Creator

Trinh Van Thuan from Vietnam National University contributed this project.

## About the Project

The system manages students' posts in universities. It allows students and clients to read posts in diverse categories, such as math, science, social, and more.

## Inspiration

Creating an environment for students to communicate and gain knowledge.

## Why MongoDB?

MongoDB provides a more flexible way to use functions than MySQL or some other query languages.

## How It Works

There are five roles: admin, manager, coordinator, student, and client.

* Admins are in charge of managing accounts.
* Managers manage coordinators.
* Coordinators manage students' posts.
* Students can read their faculty's posts that have been approved.
* Clients can read all posts that have been approved.
md
{ "tags": [ "JavaScript", "MongoDB" ], "pageDescription": "A system manage student's post for school or an organization", "contentType": "Code Example" }
Magazine Management
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/time-series-cpp
created
# MongoDB Time Series with C++ Time series data is a set of data points collected at regular intervals. It’s a common use case in many industries such as finance, IoT, and telecommunications. MongoDB provides powerful features for handling time series data, and in this tutorial, we will show you how to build a C++ console application that uses MongoDB to store time series data, related to the Air Quality Index (AQI) for a given location. We will also take a look at MongoDB Charts to visualize the data saved in the time series. Libraries used in this tutorial: 1. MongoDB C Driver version: 1.23.0 2. MongoDB C++ Driver version: 3.7.0 3. cpr library 4. vcpkg 5. Language standard: C++17 This tutorial uses Microsoft Windows 11 and Microsoft Visual Studio 2022 but the code used in this tutorial should work on any operating system and IDE, with minor changes. ## Prerequisites 1. MongoDB Atlas account with a cluster created. 2. Microsoft Visual Studio setup with MongoDB C and C++ Driver installed. Follow the instructions in Getting Started with MongoDB and C++ to install MongoDB C/C++ drivers and set up the dev environment in Visual Studio. 3. Your machine’s IP address is whitelisted. Note: You can add 0.0.0.0/0 as the IP address, which should allow access from any machine. This setting is not recommended for production use. 4. API token is generated using Air Quality Open Data Platform API Token Request Form to fetch AQI for a given location. ## Installation: Libraries Launch powershell/terminal as an administrator and execute commands shared below. Step 1: Install vcpkg. ``` git clone https://github.com/Microsoft/vcpkg.git cd vcpkg ./bootstrap-vcpkg.sh ./vcpkg integrate install ``` Step 2: Install libcpr/cpr. ``` ./vcpkg install cpr:x64-windows ``` This tutorial assumes we are working with x64 architecture. If you are targeting x86, please use this command: ``` ./vcpkg install cpr ``` Note: Below warning (if encountered) can be ignored. ``` # this is heuristically generated, and may not be correct find_package(cpr CONFIG REQUIRED) target_link_libraries(main PRIVATE cpr::cpr) ``` ## Building the application > Source code available here In this tutorial, we will build an Air Quality Index (AQI) monitor that will save the AQI of a given location to a time series collection. The AQI is a measure of the quality of the air in a particular area, with higher numbers indicating worse air quality. The AQI is based on a scale of 0 to 500 and is calculated based on the levels of several pollutants in the air. We are going to build a console application from scratch. Follow the steps on how to set up the development environment in Visual Studio from our previous article Getting Started with MongoDB and C++, under the section “Visual Studio: Setting up the dev environment.” ### Helper functions Once we have set up a Visual Studio solution, let’s start with adding the necessary headers and writing the helper functions. * Make sure to include `` to access methods provided by the *cpr* library. Note: Since we installed the cpr library with vcpkg, it automatically adds the needed include paths and dependencies to Visual Studio. * Get the connection string (URI) to the cluster and create a new environment variable with key as `“MONGODB_URI”` and value as the connection string (URI). It’s a good practice to keep the connection string decoupled from the code. Similarly, save the API token obtained in the Prerequisites section with the key as `“AQICN_TOKEN”`. 
Navigate to the Solution Explorer panel, right-click on the solution name, and click “Properties.” Go to Configuration Properties > Debugging > Environment to add these environment variables as shown below. * `“getAQI”` function makes use of the *cpr* library to make a call to the REST API, fetching the AQI data. The response to the request is then parsed to get the AQI figure. * `“saveToCollection”` function saves the given AQI figure to the time series collection. Please note that adding the `“timestamp”` key-value pair is mandatory. A missing timestamp will lead to an exception being thrown. Check out different `“timeseries”` Object Fields in Create and Query a Time Series Collection — MongoDB Manual. ``` #pragma once #include #include #include #include #include #include #include #include using namespace std; std::string getEnvironmentVariable(std::string environmentVarKey) { char* pBuffer = nullptr; size_t size = 0; auto key = environmentVarKey.c_str(); // Use the secure version of getenv, ie. _dupenv_s to fetch environment variable. if (_dupenv_s(&pBuffer, &size, key) == 0 && pBuffer != nullptr) { std::string environmentVarValue(pBuffer); free(pBuffer); return environmentVarValue; } else { return ""; } } int getAQI(std::string city, std::string apiToken) { // Call the API to get the air quality index. std::string aqiUrl = "https://api.waqi.info/feed/" + city + "/?token=" + apiToken; auto aqicnResponse = cpr::Get(cpr::Url{ aqiUrl }); // Get the AQI from the response if(aqicnResponse.text.empty()) { cout << "Error: Response is empty." << endl; return -1; } bsoncxx::document::value aqicnResponseBson = bsoncxx::from_json(aqicnResponse.text); auto aqi = aqicnResponseBson.view()"data"]["aqi"].get_int32().value; return aqi; } void saveToCollection(mongocxx::collection& collection, int aqi) { auto timeStamp = bsoncxx::types::b_date(std::chrono::system_clock::now()); bsoncxx::builder::stream::document aqiDoc = bsoncxx::builder::stream::document{}; aqiDoc << "timestamp" << timeStamp << "aqi" << aqi; collection.insert_one(aqiDoc.view()); // Log to the console window. cout << " TimeStamp: " << timeStamp << " AQI: " << aqi << endl; } ``` ### The main() function With all the helper functions in place, let’s write the main function that will drive this application. * The main function creates/gets the time series collection by specifying the `“collection_options”` to the `“create_collection”` method. Note: MongoDB creates collections implicitly when you first reference the collection in a command, however a time series collection needs to be created explicitly with “[create_collection”. * Every 30 minutes, the program gets the AQI figure and updates it into the time series collection. Feel free to modify the time interval as per your liking by changing the value passed to `“sleep_for”`. ``` int main() { // Get the required parameters from environment variable. auto mongoURIStr = getEnvironmentVariable("MONGODB_URI"); auto apiToken = getEnvironmentVariable("AQICN_TOKEN"); std::string city = "Delhi"; static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr }; if (mongoURI.to_string().empty() || apiToken.empty()) { cout << "Invalid URI or API token. Please check the environment variables." << endl; return 0; } // Create an instance. 
mongocxx::instance inst{}; mongocxx::options::client client_options; auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 }; client_options.server_api_opts(api); mongocxx::client conn{ mongoURI, client_options }; // Setup Database and Collection. const string dbName = "AQIMonitor"; const string timeSeriesCollectionName = "AQIMonitorCollection"; // Setup Time Series collection options. bsoncxx::builder::document timeSeriesCollectionOptions = { "timeseries", { "timeField", "timestamp", "granularity", "minutes" } }; auto aqiMonitorDB = conndbName]; auto aqiMonitorCollection = aqiMonitorDB.has_collection(timeSeriesCollectionName) ? aqiMonitorDB[timeSeriesCollectionName] : aqiMonitorDB.create_collection(timeSeriesCollectionName, timeSeriesCollectionOptions.view().get_document().value); // Fetch and update AQI every 30 minutes. while (true) { auto aqi = getAQI(city, apiToken); saveToCollection(aqiMonitorCollection, aqi); std::this_thread::sleep_for(std::chrono::minutes(30)); } return 0; } ``` When this application is executed, you can see the below activity in the console window. ![AQI Monitor application in C++ with MongoDB time series You can also see the time series collection in Atlas reflecting any change made via the console application. ## Visualizing the data with MongoDB Charts We can make use of MongoDB Charts to visualize the AQI data and run aggregation on top of it. Step 1: Go to MongoDB Charts and click on “Add Dashboard” to create a new dashboard — name it “AQI Monitor”. Step 2: Click on “Add Chart”. Step 3: In the “Select Data Source” dialog, go to the “Project” tab and navigate to the time series collection created by our code. Step 4: Change the chart type to “Continuous Line”. We will use this chart to display the AQI trends over time. Step 5: Drag and drop the “timestamp” and “aqi” fields into the X axis and Y axis respectively. You can customize the look and feel (like labels, color, and data format) in the “Customize” tab. Click “Save and close” to save the chart. Step 6: Let’s add another chart to display the maximum AQI — click on “Add Chart” and select the same data source as before. Step 7: Change the chart type to “Number”. Step 8: Drag and drop the “aqi” field into “Aggregation” and change Aggregate to “MAX”. Step 9: We can customize the chart text to change color based on the AQI values. Let’s make the text green if AQI is less than or equal to 100, and red otherwise. We can perform this action with the conditional formatting option under Customize tab. Step 10: Similarly, we can add charts for minimum and average AQI. The dashboard should finally look something like this: Tip: Change the dashboard’s auto refresh settings from “Refresh settings” button to choose a refresh time interval of your choice for the charts. ## Conclusion With this article, we covered creating an application in C++ that writes data to a MongoDB time series collection, and used it further to create a MongoDB Charts dashboard to visualize the data in a meaningful way. The application can be further expanded to save other parameters like PM2.5 and temperature. Now that you've learned how to create an application using the MongoDB C++ driver and MongoDB time series, put your new skills to the test by building your own unique application. Share your creation with the community and let us know how it turned out!
md
{ "tags": [ "MongoDB", "C++" ], "pageDescription": "In this tutorial, we will show you how to build a C++ console application that uses MongoDB to store time series data, related to the Air Quality Index (AQI) for a given location. ", "contentType": "Tutorial" }
MongoDB Time Series with C++
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/atlas-search-multi-language-data-modeling
created
# Atlas Search Multi-Language Data Modeling We live in an increasingly globalized economy. By extension, users have expectations that our applications will understand the context of their culture and by extension: language. Luckily, most search engines—including, Atlas Search—support multiple languages. This article will walk through three options of query patterns, data models, and index definitions to support your various multilingual application needs. To illustrate the options, we will create a fictitious scenario. We manage a recipe search application that supports three cultures, and by extension, languages: English, Japanese (Kuromoji), and German. Our users are located around the globe and need to search for recipes in their native language. ## 1. Single field We have one document for each language in the same collection, and thus each field is indexed separately as its own language. This simplifies the query patterns and UX at the expense of bloated index storage. **Document:** ``` [ {"name":"すし"}, {"name":"Fish and Chips"}, {"name":"Käsespätzle"} ] ``` **Index:** ``` { "name":"recipes", "mappings": { "dynamic": false, "fields": { "name": { "type": "string", "analyzer": "lucene.kuromoji" }, "name": { "type": "string", "analyzer": "lucene.english" }, "name": { "type": "string", "analyzer": "lucene.german" } } } } ``` **Query:** ``` { "$search": { "index": "recipes", "text": { "query": "Fish and Chips", "path": "name" } } } ``` **Pros:** * One single index definition. * Don’t need to specify index name or path based on user’s language. * Can support multiple languages in a single query. **Cons:** * As more fields get added, the index definition needs to change. * Index definition payload is potentially quite large (static field mapping per language). * Indexing fields as irrelevant languages causing larger index size than necessary. ## 2. Multiple collections We have one collection and index per language, which allows us to isolate the different recipe languages. This could be useful if we have more recipes in some languages than others at the expense of lots of collections and indexes. **Documents:** ``` recipes_jp: [{"name":"すし"}] recipes_en: [{"name":"Fish and Chips"}] recipes_de: [{"name":"Käsespätzle"}] ``` **Index:** ``` { "name":"recipes_jp", "mappings": { "dynamic": false, "fields": { "name": { "type": "string", "analyzer": "lucene.kuromoji" } } } } { "name":"recipes_en", "mappings": { "dynamic": false, "fields": { "name": { "type": "string", "analyzer": "lucene.english" } } } } { "name":"recipes_de", "mappings": { "dynamic": false, "fields": { "name": { "type": "string", "analyzer": "lucene.german" } } } } ``` **Query:** ``` { "$search": { "index": "recipes_jp" "text": { "query": "すし", "path": "name" } } } ``` **Pros:** * Can copy the same index definition for each collection (replacing the language). * Isolate different language documents. **Cons:** * Developers have to provide the language name in the index path in advance. * Need to potentially copy documents between collections on update. * Each index is a change stream cursor, so could be expensive to maintain. ## 3. Multiple fields By embedding each language in a parent field, we can co-locate the translations of each recipe in each document. 
**Document:** ``` { "name": { "en":"Fish and Chips", "jp":"すし", "de":"Käsespätzle" } } ``` **Index:** ``` { "name":"multi_language_names",   "mappings": {     "dynamic": false,     "fields": {       "name": {         "fields": {           "de": {             "analyzer": "lucene.german",             "type": "string"           },           "en": {             "analyzer": "lucene.english",             "type": "string"           },           "jp": {             "analyzer": "lucene.kuromoji",             "type": "string"           }         },         "type": "document"       }     }   } } ``` **Query:** ``` { "$search": { "index": "multi_language_names" "text": { "query": "Fish and Chips", "path": "name.en" } } } ``` **Pros:** * Easier to manage documents. * Index definition is sparse. **Cons:** * Index definition payload is potentially quite large (static field mapping per language). * More complex query and UX.
md
{ "tags": [ "Atlas" ], "pageDescription": "This article will walk through three options of query patterns, data models, and index definitions to support your various multilingual application needs. ", "contentType": "Tutorial" }
Atlas Search Multi-Language Data Modeling
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/search-engine-using-atlas-full-text-search
created
# Tutorial: Build a Movie Search Engine Using Atlas Full-Text Search in 10 Minutes >This article is out of date. Check out this new post for the most up-to-date way to MongoDB Atlas Search to find your favorite movies. 📽 🎞 > > Giving your users the ability to find exactly what they are looking for in your application is critical for a fantastic user experience. With the new MongoDB Atlas Full-Text Search service, we have made it easier than ever to integrate simple yet sophisticated search capabilities into your MongoDB applications. To demonstrate just how easy it is, let's build a movie search engine - in only 10 minutes. Built on Apache Lucene, Full-Text Search adds document data to a full-text search index to make that data searchable in a highly performant, scalable manner. This tutorial will guide you through how to build a web application to search for movies based on a topic using Atlas' sample movie data collection on a free tier cluster. We will create a Full-Text Search index on that sample data. Then we will query on this index to filter, rank and sort through those movies to quickly surface movies by topic. Armed with a basic knowledge of HTML and Javascript, here are the tasks we will accomplish: * ⬜ Spin up an Atlas cluster and load sample movie data * ⬜ Create a Full-Text Search index in movie data collection * ⬜ Write an aggregation pipeline with $searchBeta operator * ⬜ Create a RESTful API to access data * ⬜ Call from the front end Now break out the popcorn, and get ready to find that movie that has been sitting on the tip of your tongue for weeks. To **Get Started**, we will need: 1. A free tier (M0) cluster on MongoDB Atlas. Click here to sign up for an account and deploy your free cluster on your preferred cloud provider and region. 2. The Atlas sample dataset loaded into your cluster. You can load the sample dataset by clicking the ellipse button and **Load Sample Dataset**. > For more detailed information on how to spin up a cluster, configure your IP address, create a user, and load sample data, check out Getting Started with MongoDB Atlas from our documentation. 3. (Optional) MongoDB Compass. This is the free GUI for MongoDB that allows you to make smarter decisions about document structure, querying, indexing, document validation, and more. The latest version can be found here . Once your sample dataset is loaded into your database, let's have a closer look to see what we are working within the Atlas Data Explorer. In your Atlas UI, click on **Collections** to examine the `movies` collection in the new `sample_mflix` database. This collection has over 23k movie documents with information such as title, plot, and cast. * ✅ Spin up an Atlas cluster and load sample movie data * ⬜ Create a Full-Text Search index in movie data collection * ⬜ Write an aggregation pipeline with $searchBeta operator * ⬜ Create a RESTful API to access data * ⬜ Call from the front end ## Create a Full-Text Search Index Our movie search engine is going to look for movies based on a topic. We will use Full-Text Search to query for specific words and phrases in the 'fullplot' field. The first thing we need is a Full-Text Search index. Click on the tab titled SearchBETA under **Collections**. Clicking on the green **Create a Search Index** button will open a dialog that looks like this: By default, we dynamically map all the text fields in your collection. This suits MongoDB's flexible data model perfectly. 
As you add new data to your collection and your schema evolves, dynamic mapping accommodates those changes in your schema and adds that new data to the Full-Text Search index automatically. Let's accept the default settings and click **Create Index**. *And that's all you need to do to start taking advantage of Lucene in your MongoDB Atlas data!* * ✅ Spin up an Atlas cluster and load sample movie data * ✅ Create a Full-Text Search index in movie data collection * ⬜ Write an aggregation pipeline with $searchBeta operator * ⬜ Create a RESTful API to access data * ⬜ Call from the front end ## Write Aggregation Pipeline With $searchbeta Operator Full-Text Search queries take the form of an aggregation pipeline stage. The **$searchBeta** stage performs a search query on the specified field(s) covered by the Full-Text Search index and must be used as the first stage in the aggregation pipeline. Let's use MongoDB Compass to see an aggregation pipeline that makes use of this Full-Text Search index. For instructions on how to connect your Atlas cluster to MongoDB Compass, click here. *You do not have to use Compass for this stage, but I really love the easy-to-use UI Compass has to offer. Plus the ability to preview the results by stage makes troubleshooting a snap! For more on Compass' Aggregation Pipeline Builder, check out this* blog*.* Navigate to the Aggregations tab in the `sample_mflix.movies` collection: ### Stage 1. $searchBeta For the first stage, select the **$searchBeta** aggregation operator to search for the terms 'werewolves and vampires' in the `fullplot` field. Using the **highlight** option will return the highlights by adding fields to the result that display search terms in their original context, along with the adjacent text content. (More on this later.) >Note the returned movie documents in the preview panel on the right. If no documents are in the panel, double-check the formatting in your aggregation code. ### Stage 2: $project We use `$project` to get back only the fields we will use in our movie search application. We also use the `$meta` operator to surface each document's **searchScore** and **searchHighlights** in the result set. Let's break down the individual pieces in this stage further: **SCORE:** The `"$meta": "searchScore"` contains the assigned score for the document based on relevance. This signifies how well this movie's `fullplot` field matches the query terms 'werewolves and vampires' above. Note that by scrolling in the right preview panel, the movie documents are returned with the score in descending order so that the best matches are provided first. **HIGHLIGHT:** The **"$meta": "searchHighlights"** contains the highlighted results. *Because* **searchHighlights** *and* **searchScore** *are not part of the original document, it is necessary to use a $project pipeline stage to add them to the query output.* Now open a document's **highlight** array to show the data objects with text **values** and **types**. ``` bash title:"The Mortal Instruments: City of Bones" fullplot:"Set in contemporary New York City, a seemingly ordinary teenager, Clar..." year:2013 score:6.849891185760498 highlight:Array 0:Object path:"fullplot" texts:Array 0:Object value:"After the disappearance of her mother, Clary must join forces with a g..." type:"text" 1:Object value:"vampires" type:"hit" 2:Object 3:Object 4:Object 5:Object 6:Object score:3.556248188018799 ``` **highlight.texts.value** - text from the `fullplot` field, which returned a match. 
**highlight.texts.type** - either a hit or a text. A hit is a match for the query, whereas a **text** is text content adjacent to the matching string. We will use these later in our application code. ### Stage 3: $limit Remember the results are returned with the scores in descending order, so $limit: 10 will bring the 10 most relevant movie documents to your search query. Finally, if you see results in the right preview panel, your aggregation pipeline is working properly! Let's grab that aggregation code with Compass' Export Pipeline to Language feature by clicking the button in the top toolbar. Your final aggregation code will be this: ``` bash { $searchBeta: { search: { query: 'werewolves and vampires', path: 'fullplot' }, highlight: { path: 'fullplot' } }}, { $project: { title: 1, _id: 0, year: 1, fullplot: 1, score: { $meta: 'searchScore' }, highlight: { $meta: 'searchHighlights' } }}, { $limit: 10 } ] ``` * ✅ Spin up an Atlas cluster and load sample movie data * ✅ Create a Full-Text Search index in movie data collection * ✅ Write an aggregation pipeline with $searchBeta operator * ⬜ Create a RESTful API to access data * ⬜ Call from the front end ## Create a REST API Now that we have the heart of our movie search engine in the form of an aggregation pipeline, how will we use it in an application? There are lots of ways to do this, but I found the easiest was to simply create a RESTful API to expose this data - and for that, I used MongoDB Stitch's HTTP Service. [Stitch is MongoDB's serverless platform where functions written in Javascript automatically scale to meet current demand. To create a Stitch application, return to your Atlas UI and click **Stitch** under SERVICES on the left menu, then click the green **Create New Application** button. Name your Stitch application FTSDemo and make sure to link to your M0 cluster. All other default settings are fine: Now click the **3rd Party Services** menu on the left and then **Add a Service**. Select the HTTP service and name it **movies**: Click the green **Add a Service** button, and you'll be directed to add an incoming webhook. Once in the **Settings** tab, enable **Respond with Result**, set the HTTP Method to **GET**, and to make things simple, let's just run the webhook as the System and skip validation. In this service function editor, replace the example code with the following: ``` javascript exports = function(payload) { const collection = context.services.get("mongodb-atlas").db("sample_mflix").collection("movies"); let arg = payload.query.arg; return collection.aggregate( { $searchBeta: { search: { query: arg, path:'fullplot', }, highlight: { path: 'fullplot' } }}, { $project: { title: 1, _id:0, year:1, fullplot:1, score: { $meta: 'searchScore'}, highlight: {$meta: 'searchHighlights'} }}, { $limit: 10} ]).toArray(); }; ``` Let's break down some of these components. MongoDB Stitch interacts with your Atlas movies collection through the global **context** variable. 
In the service function, we use that context variable to access the sample_mflix.movies collection in your Atlas cluster: ``` javascript const collection = context.services.get("mongodb-atlas").db("sample_mflix").collection("movies"); ``` We capture the query argument from the payload: ``` javascript let arg = payload.query.arg; ``` Return the aggregation code executed on the collection by pasting your aggregation into the code below: ``` javascript return collection.aggregate(<>).toArray(); ``` Finally, after pasting the aggregation code, we changed the terms 'werewolves and vampires' to the generic arg to match the function's payload query argument - otherwise our movie search engine capabilities will be extremely limited. Now you can test in the Console below the editor by changing the argument from **arg1: "hello"** to **arg: "werewolves and vampires"**. >Note: Please be sure to change BOTH the field name **arg1** to **arg**, as well as the string value **"hello"** to **"werewolves and vampires"** - or it won't work. Click **Run** to verify the result: If this is working, congrats! We are almost done! Make sure to **SAVE** and deploy the service by clicking **REVIEW & DEPLOY CHANGES** at the top of the screen. ### Use the API The beauty of a REST API is that it can be called from just about anywhere. Let's execute it in our browser. However, if you have tools like Postman installed, feel free to try that as well. Switch to the **Settings** tab of the **movies** service in Stitch and you'll notice a Webhook URL has been generated. Click the **COPY** button and paste the URL into your browser. Then append the following to the end of your URL: **?arg='werewolves and vampires'** If you receive an output like what we have above, congratulations! You have successfully created a movie search API! * ✅ Spin up an Atlas cluster and load sample movie data * ✅ Create a Full-Text Search index in movie data collection * ✅ Write an aggregation pipeline with $searchBeta operator * ✅ Create a RESTful API to access data * ⬜ Call from the front end ## Finally! - The Front End From the front end application, it takes a single call from the Fetch API to retrieve this data. Download the following [index.html file and open it in your browser. You will see a simple search bar: Entering data in the search bar will bring you movie search results because the application is currently pointing to an existing API. Now open the HTML file with your favorite text editor and familiarize yourself with the contents. You'll note this contains a very simple container and two javascript functions: - Line 82 - **userAction()** will execute when the user enters a search. If there is valid input in the search box and no errors, we will call the **buildMovieList()** function. - Line 125 - **buildMovieList()** is a helper function for **userAction()** which will build out the list of movies, along with their scores and highlights from the 'fullplot' field. Notice in line 146 that if the highlight.texts.type === "hit" we highlight the highlight.texts.value with the tag. ### Modify the Front End Code to Use Your API In the **userAction()** function, we take the input from the search form field in line 79 and set it equal to **searchString**. Notice on line 82 that the **webhook_url** is already set to a RESTful API I created in my own FTSDemo application. In this application, we append that **searchString** input to the **webhook_url** before calling it in the fetch API in line 111. 
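Stripped of the surrounding UI code, that call boils down to something like the following sketch. The `WEBHOOK_URL` constant is a placeholder for the URL you copied from your own Stitch HTTP service, and the function name is purely illustrative:

```javascript
// Illustrative only: the downloadable index.html already wires this up.
// WEBHOOK_URL stands in for the URL copied from your Stitch HTTP service.
const WEBHOOK_URL = "";

async function searchMovies(searchString) {
  const response = await fetch(
    `${WEBHOOK_URL}?arg='${encodeURIComponent(searchString)}'`
  );
  // Each result contains title, year, fullplot, score, and highlight fields.
  return response.json();
}
```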
To make this application fully your own, simply replace the existing **webhook_url** value on line 82 with your own API from the **movies** Stitch HTTP Service you just created. 🤞 Now save these changes, and open the **index.html** file once more in your browser, et voilà! You have just built your movie search engine using Full-Text search indexes. 🙌 What kind of movie do you want to watch?! ## That's a Wrap! Now that you have just seen how easy it is to build a simple, powerful search into an application with MongoDB Atlas Full-Text Search, go ahead and experiment with other more advanced features, such as type-ahead or fuzzy matching, for your fine-grained searches. Check out our $searchBeta documentation for other possibilities. Harnessing the power of Apache Lucene for efficient search algorithms, static and dynamic field mapping for flexible, scalable indexing, all while using the same MongoDB Query Language (MQL) you already know and love, spoken in our very best Liam Neeson impression MongoDB now has a very particular set of skills. Skills we have acquired over a very long career. Skills that make MongoDB a DREAM for developers like you.
md
{ "tags": [ "Atlas" ], "pageDescription": "Build a Movie Search Engine Using Full Text Search Indexes on MongoDB Atlas.", "contentType": "Tutorial" }
Tutorial: Build a Movie Search Engine Using Atlas Full-Text Search in 10 Minutes
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/satellite-code-example-mongodb
created
# EnSat

## Creators

Ashish Adhikari, Awan Shrestha, Sabil Shrestha, and Sansrit Paudel from Kathmandu University in Nepal contributed this project.

## About the project

EnSat (originally started under the name "PicoSat" and later renamed "EnSat") is a miniature version of an environmental satellite which helps to record and analyze environmental parameters such as altitude, pressure, temperature, humidity, and pollution level. Access this project on GitHub here.

## Inspiration

I was always very interested in how things work. We had a television when I was young, and there were all these wires connected to the television. I was fascinated by how the TV worked and how it could show moving images. I always wondered about things like these as a child. I studied this, and now I'm in college learning more about it.

For this project, I wanted to do something that included data transfer at a very low level. My country is not so advanced technologically. But last year, Nepal's first satellite was launched into space. That inspired me. I might not be able to do that same thing now, but I wanted to try something smaller, so I built a miniature satellite. And that's how this project came to be. I was working on the software, and my friends were working on the hardware, and that's how we collaborated.

:youtube[]{vid=1tP2LEQJyNU}

## Why MongoDB?

We had our professor, Dr. Gajendra Sharma, supervising the project, but we were free to choose whatever we wanted. I used MongoDB for the first time in this project; before that, I was not familiar with MongoDB. I was also not used to the GUI React part; while I was learning React, the course also included MongoDB. Before this project, I was using MySQL and was planning on using MySQL again, but after following this course, I decided to switch to MongoDB. And this was good; transferring the data and storing the data is so much more comfortable with MongoDB. With MongoDB, we only have to fetch the data from the database and send it. The project is quite complicated, but MongoDB made it so much easier on the software level, so that's why we chose MongoDB for the project.

## How it works

A satellite with a microcontroller and sensors transmits the environmental data to the Ground Station over radio frequency using the 2.4 GHz ISM band. The Ground Station has a microcontroller and receiver connected to a computer where the data is stored in the MongoDB database. The API then fetches data from the database, providing live data and historical data for the environmental parameters. Using the data from the API, the information is shown in the GUI built in React.

This was our group semester project, where the Serialport package was used for data communication, MongoDB for the database, and React for the GUI. Our report in the GitHub repository can also tell you in more detail how everything works. It is a unique and different project, and it is our small effort to tackle the global issue of climate change and environmental pollution. The project includes both hardware and software parts. EnSat spans multiple disciplines, and creating it was a huge learning opportunity for us as we made our own design and architecture for the project's hardware and software parts. This project can inspire many students to try MongoDB with skills from different domains and try something good for our world.

## Challenges and learnings

There was one challenging part, and I was stuck for three days. It made me build my own serial data port to be able to get data into the server.
That was a difficult time. With MongoDB, there was not any difficulty. It made the job so much easier. It’s also nice to share that we participated in three competitions and that we won three awards. One contest was where the satellite is actually dropped from the drone from the height, and we have to capture the environmental data at different heights as it comes down. It was the first competition of that kind in my country, and we won that one. We won another one for the best product and another for the best product under the Advancing Towards Smart Cities, Sustainable Development Goals category. I learned so many things while working on this project. Not only React and MongoDB, but I also learned everything around the hardware: Arduino programming, programming C for Arduino, the hardware level of programming. And the most important thing I learned is never to give up. At times it was so frustrating and difficult to get everything running. If you want to do something, keep on trying, and sometimes it clicks in your mind, and you just do it, and it happens. I’m glad that MongoDB is starting programs for Students. These are the kind of things that motivate us. Coming from a not so developed country, we sometimes feel a bit separated. It’s so amazing that we can actually take part in this kind of program. It’s the most motivating factor in doing engineering and studying engineering. Working on these complex projects and then being recognized by MongoDB is a great motivation for all of us.
md
{ "tags": [ "JavaScript", "MongoDB", "C++", "Node.js", "React" ], "pageDescription": "An environmental satellite to get information about your environment.", "contentType": "Code Example" }
EnSat
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/how-to-connect-mongodb-atlas-to-vercel-using-the-new-integration
created
# How to Connect MongoDB Atlas to Vercel Using the New Integration Getting a MongoDB Atlas database created and linked to your Vercel project has never been easier. In this tutorial, we’ll get everything set up—including a starter Next.js application—in just minutes. ## Prerequisites For this tutorial, you’ll need: * MongoDB Atlas (sign up for free). * Vercel account (sign up for free). * Node.js 14+. > This tutorial will work with any frontend framework if Next.js isn’t your preference. ## What is Vercel? Vercel is a cloud platform for hosting websites and web applications. Deployment is seamless and scaling is automatic. Many web frameworks will work on Vercel, but the one most notable is Vercel’s own Next.js. Next.js is a React-based framework and has many cool features, like built-in routing, image optimization, and serverless and Edge Functions. ## Create a starter project For our example, we are going to use Next.js. However, you can use any web framework you would like. We’ll use the `with-mongodb` example app to get us started. This will give us a Next.js app with MongoDB Atlas integration already set up for us. ```bash npx create-next-app --example with-mongodb vercel-demo -y ``` We are using the standard `npx create-next-app` command along with the `--example with-mongodb` parameter which will give us our fully integrated MongoDB application. Then, `vercel-demo` is the name of our application. You can name yours whatever you would like. After that completes, navigate to your application directory: ```bash cd vercel-demo ```` At this point, we need to configure our MongoDB Atlas database connection. Instead of manually setting up our database and connection within the MongoDB Atlas dashboard, we are going to do it all through Vercel! Before we move over to Vercel, let’s publish this project to GitHub. Using the built-in Version Control within VS Code, if you are logged into your GitHub account, it’s as easy as pressing one button in the Source Control tab. I’m going to press *Publish Branch* and name my new repo `vercel-integration`. ## Create a Vercel project and integrate MongoDB From your Vercel dashboard, create a new project and then choose to import a GitHub repository by clicking Continue with GitHub. Choose the repo that you just created, and then click Deploy. This deployment will actually fail because we have not set up our MongoDB integration yet. Go back to the main Vercel dashboard and select the Integrations tab. From here, you can browse the marketplace and select the MongoDB Atlas integration. Click Add Integration, select your account from the dropdown, and then click continue. Next, you can either add this integration to all projects or select a specific project. I’m going to select the project I just created, and then click Add Integration. If you do not already have a MongoDB Atlas account, you can sign up for one at this step. If you already have one, click “Log in now.” The next step will allow you to select which Atlas Organization you would like to connect to Vercel. Either create a new one or select an existing one. Click Continue, and then I Acknowledge. The final step allows you to select an Atlas Project and Cluster to connect to. Again, you can either create new ones or select existing ones. After you have completed those steps, you should end up back in Vercel and see that the MongoDB integration has been completed. 
If you go to your project in Vercel, then select the Environment Variables section of the Settings page, you’ll see that there is a new variable called `MONGODB_URI`. This can now be used in our Next.js application. For more information on how to connect MongoDB Atlas with Vercel, see our documentation. ## Sync Vercel settings to local environment All we have to do now is sync our environment variables to our local environment. You can either manually copy/paste your `MONGODB_URI` into your local `.env` file, or you can use the Vercel CLI to automate that. Let’s add the Vercel CLI to our project by running the following command: ```bash npm i vercel ``` In order to link our local project with our Vercel project, run the following command: ```bash vercel ``` Choose a login method and use the browser pop-up to authenticate. Answer *yes* to set up and deploy. Select the appropriate scope for your project. When asked to link to an existing project, type *Y* and press *enter*. Now, type the name of your Vercel project. This will link the local project and run another build. Notice that this build works. That is because the environment variable with our MongoDB connection string is already in production. But if you run the project locally, you will get an error message. ```bash npm run dev ``` We need to pull the environment variables into our project. To do that, run the following: ```bash vercel env pull ``` Now, every time you update your repo, Vercel will automatically redeploy your changes to production! ## Conclusion In this tutorial, we set up a Vercel project, linked it to a MongoDB Atlas project and cluster, and linked our local environment to these. These same steps will work with any framework and will provide you with the local and production environment variables you need to connect to your MongoDB Atlas database. For an in-depth tutorial on Next.js and MongoDB, check out How to Integrate MongoDB Into Your Next.js App. If you have any questions or feedback, check out our MongoDB Community forums and let us know what you think.
md
{ "tags": [ "Atlas", "JavaScript", "Vercel", "Node.js" ], "pageDescription": "Getting a MongoDB Atlas database created and linked to your Vercel project has never been easier. In this tutorial, we’ll get everything set up—including a starter Next.js application—in just minutes.", "contentType": "Quickstart" }
How to Connect MongoDB Atlas to Vercel Using the New Integration
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-javascript-nan-to-n-api
created
# How We Migrated Realm JavaScript From NAN to N-API Recently, the Realm JavaScript team has reimplemented the Realm JS Node.js SDK from the ground up to use N-API. In this post, we describe the need to migrate to N-API because of breaking changes in the JavaScript Virtual Machine and how we approached it in an iterative way. ## History Node.js and Electron are supported platforms for the Realm JS SDK. Our embedded library consists of a JavaScript library and a native code Node.js addon that interacts with the Realm Database native code. This provides the database functionality to the JS world. The addon interacts with the V8 engine, which is the JavaScript virtual machine used in Node.js that executes the JavaScript user code. There are different ways to write a Node.js addon. One way is to use the V8 APIs directly. Another is to use an abstraction layer that hides the V8 specifics and provides a stable API across versions of Node.js. The JavaScript V8 virtual machine is a moving target. Its APIs are constantly changing between versions. Some are deprecated, and new APIs are introduced all the time. Previous versions of Realm JS used NAN to interact with the V8 virtual machine because we wanted to have a more stable layer of APIs to integrate with. While useful, this had its drawbacks since NAN also needed to handle deprecated V8 APIs across versions. And since NAN integrates tightly with the V8 APIs, it did not shield us from the virtual machine changes underneath it. In order to work across the different Node.js versions, we needed to create a native binary for every major Node.js version. This sometimes required major effort from the team, resulting in delayed releases of Realm JS for a new Node.js version. The changing VM API functionality meant handling the deprecated V8 features ourselves, resulting in various version checks across the code base and bugs, when not handled in all places. Many other native addons have experienced the same problem. Thus, the Node.js team decided to create a stable API layer built within Node.js itself, which guarantees API stability across major Node.js versions regardless of the virtual machine API changes underneath. This API layer is called N-API. It not only provides API stability but also guarantees ABI stability. This means binaries compiled for one major version are able to run on later major versions of Node.js. N-API is a C API. To support C++ for writing Node.js addons, there is a module called node-addon-api. This module is a more efficient way to write code that calls N-API. It provides a layer on top of N-API. Developers use this to create and manipulate JavaScript values with integrated exception handling that allows handling JavaScript exceptions as native C++ exceptions and vice versa. ## N-API Challenges When we started our move to N-API, the Realm JavaScript team decided early on that we would build an N-API native module using the node-addon-api library. This is because Realm JS is written in C++ and there is no reason not to choose the C++ layer over the pure N-API C layer. The motivation of needing to defend against breaking changes in the JS VM became one of the goals when doing a complete rewrite of the library. We needed to provide exactly the same behavior that currently exists. Thankfully, the Realm JS library has an extensive suite of tests which cover all of the supported features. The tests are written in the form of integration tests which test the specific user API, its invocation, and the expected result.
Thus, we didn't need to handle and rewrite fine-grained unit tests which test specific details of how the implementation is done. We chose this tack because we could iteratively convert our codebase to N-API, slowly converting sections of code while running regression tests which confirmed correct behavior, while still running NAN and N-API at the same time. This allowed us to not tackle a full rewrite all at once. One of the early challenges we faced is how we were going to approach such a big rewrite of the library. Rewriting a library with a new API while at the same time having the ability to test as early as possible is ideal to make sure that code is running correctly. We wanted the ability to perform the N-API migration partially, reimplementing different parts step by step, while others still remained on the old NAN API. This would allow us to build and test the whole project with some parts in NAN and others in N-API. Some of the tests would invoke the new reimplemented functionality and some tests would be using the old one. Unfortunately, NAN and N-API diverged too much starting from the initial setup of the native addon. Most of the NAN code used the `v8::Isolate` and the N-API code had the opaque structure `Napi::Env` as a substitute for it. Our initialization code with NAN was using the v8::Isolate to initialize the Realm constructor in the init function:

``` clike
static void init(v8::Local<v8::Object> exports,
                 v8::Local<v8::Value> module,
                 v8::Local<v8::Context> context) {
    v8::Isolate* isolate = context->GetIsolate();
    v8::Local<v8::Function> realm_constructor = js::RealmClass::create_constructor(isolate);
    Nan::Set(exports, realm_constructor->GetName(), realm_constructor);
}

NODE_MODULE_CONTEXT_AWARE(Realm, realm::node::init);
```

and our N-API equivalent for this code was going to be:

``` clike
static Napi::Object NAPI_Init(Napi::Env env, Napi::Object exports) {
    return exports;
}

NODE_API_MODULE(realm, NAPI_Init)
```

When we look at the code, we can see that we can't call `v8::Isolate`, which we used in our old implementation, from the exposed N-API. The problem becomes clear: We don't have any access to the `v8::Isolate`, which we need if we want to invoke our old initialization logic. Fortunately, it turned out we could just use a hack in our initial implementation. This enabled us to convert certain parts of our Realm JS implementation while we continued to build and ship new versions of Realm JS with parts using NAN. Since `Napi::Env` is just an equivalent substitute for `v8::Isolate`, we can check if it has a `v8::Isolate` hiding in it. As it turns out, there is a way to do this, but it's a private member. We can grab it from memory with:

``` clike
napi_env e = env;
v8::Isolate* isolate = (v8::Isolate*)e + 3;
```

and our NAPI_Init method becomes:

``` clike
static Napi::Object NAPI_Init(Napi::Env env, Napi::Object exports) {
    //NAPI: FIXME: remove when NAPI complete
    napi_env e = env;
    v8::Isolate* isolate = (v8::Isolate*)e + 3;
    //the following two will fail if isolate is not found at the expected location
    auto currentIsolate = isolate->GetCurrent();
    auto context = currentIsolate->GetCurrentContext();
    // realm::node::napi_init(env, currentIsolate, exports);
    return exports;
}
```

Here, we invoke two functions — `isolate->GetCurrent()` and `isolate->GetCurrentContext()` — to verify early on that the pointer to the `v8::Isolate` is correct and there are no crashes. This allowed us to extract a simple function which can return a `v8::Isolate` from the `Napi::Env` structure any time we needed it.
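Sketched out, that helper might have looked something like this (a simplified illustration of the approach described above, not the exact code that shipped in Realm JS):

``` clike
// Sketch only: recover the v8::Isolate hiding behind a Napi::Env during the
// hybrid NAN/N-API phase. It mirrors the pointer arithmetic shown above and
// relies on the private layout of napi_env, so it was only ever a temporary bridge.
static v8::Isolate* isolate_from_env(Napi::Env env) {
    napi_env e = env;
    return (v8::Isolate*)e + 3;
}
```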
We continued to switch all our function signatures to use the new `Napi::Env` structure, but the implementation of these functions could be left unchanged by getting the `v8::Isolate` from `Napi::Env` where needed. Not every NAN function of Realm JS could be reimplemented this way but still, this hack allowed for an easy process by converting the function to NAPI, building and testing. It then gave us the freedom to ship a fully NAPI version without the hack once we had time to convert the underlying API to the stable version. ## What We Learned Having the ability to build the entire project early on and then even run it in hybrid mode with NAN and N-API allowed us to both refactor and continue to ship net new features. We were able to run specific tests with the new functionality while the other parts of the library remained untouched. Being able to build the project is more valuable than spending months reimplementing with the new API, only then to discover something is not right. As the saying goes, "Test early, fail fast." Our experience while working with N-API and node-addon-api was positive. The API is easy to use and reason about. The integrated error handling is of great benefit. It catches JS exceptions from JS callbacks and rethrows them as C++ exceptions and vice versa. There were some quirks along the way with how node-addon-api handled allocated memory when exceptions were raised, but we were easily able to overcome them. We have submitted PRs for some of these fixes to the node-addon-api library. Recently, we flipped the switch to one of the major features we gained from N-API - the build system release of the Realm JS native binary. Now, we build and release a single binary for every Node.js major version. When we finished, the N-API implementation of Realm JS resulted in much cleaner code than we had before and our test suite was green. The N-API migration fixed some of the major issues we had with the previous implementation and ensures our future support for every new major Node.js version. For our community, it means peace of mind that Realm JS will continue to work regardless of which Node.js or Electron version they are working with - this is the reason why the Realm JS team chose to replatform on N-API. To learn more, ask questions, leave feedback, or simply connect with other MongoDB developers, visit our community forums. Come to learn. Stay to connect. > To get started with RealmJS, visit our GitHub Repo. Getting started with Atlas is also easy. Sign up for a free MongoDB Atlas account to start working with all the exciting new features of MongoDB, including Realm and Charts, today!
md
{ "tags": [ "Realm", "JavaScript", "Node.js" ], "pageDescription": "The Realm JavaScript team has reimplemented the Realm JS Node.js SDK from the ground up to use N-API. Here we describe how and why.", "contentType": "News & Announcements" }
How We Migrated Realm JavaScript From NAN to N-API
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/python/merge-url
created
# MergeURL - Python Example App ## Creators Mehant Kammakomati and Sai Vittal B contributed this project. ## About the project MergeURL is an instant URL shortening and merging service that lets you merge multiple URLs into a single short URL. You can merge up to 5 URLs within no time and share one single shortened URL. MergeURL lifts off the barriers of user registration and authentication, making it instant to use. It also provides two separate URLs to view the list of URLs and to open all of them in the browser. MergeURL was ranked the #2 product of the day on ProductHunt. It is used by people across the world, with large numbers coming from the United States and India. ## Inspiration We had this problem of sharing multiple URLs in a message or an email or via Twitter. We wanted to create a trustworthy service that can merge all those URLs into a single short one. We tried finding out if there were already solutions to this problem, and most of the solutions we found required account creation or entering our credentials. We wanted something secure and trustworthy that doesn’t require user authentication. Sai Vittal worked mostly on the front end of the application, and I (Mehant) worked on the back end and the MongoDB database. It was a small problem that we encountered that led us to build MergeURL. We added our product to ProductHunt last August, and we became #2 for a while; this gave us the kickstart to reach a wider audience. We currently have around 181,000 users and around 252,000 page views. The number of users motivates us to work a lot on updates and add more security layers to it. ## Why MongoDB? For MergeURL, MongoDB plays a crucial role in our URL shortening and merging algorithm, contributing to higher security and reducing data redundancy. MongoDB Atlas lifts the burden of hosting and maintaining databases, which made our initial development of MergeURL 10X faster, and further maintenance and monitoring have become relatively easy. First, we discussed whether to go for a SQL or NoSQL database. Based on our algorithms, our primary conclusion was that going with a NoSQL database would be the better option. MongoDB is at the top of the chart; it is the one that comes to mind when you think about NoSQL databases. Client libraries like PyMongo make it so much easier to connect and use MongoDB. We use MongoDB Atlas itself because it’s already hosted. It made it much easier for us to work with it. We’ve been using the credits that we received from the GitHub Student Developer Pack offer. ## How it works The frontend is written using React, and it’s compiled into optimized static assets. As we know, MergeURL is a relatively simple service; we don’t need a lot of complicated stuff in the back end. Therefore, we used a microservice; we used Flask to write the backend server. And we use MongoDB. We have specific algorithms that work on the URLs, and MongoDB played a vital role in implementing those algorithms and keeping redundancy under control. It works relatively smoothly. You go to our website; you fill out the URLs you want to shorten, and it will give you a short URL that includes all the URLs. ## Challenges and lessons learned One of the challenges was experience: we both didn't have any experience launching a product and getting it to users. Launching MergeURL was the first time we did this, and it went very well. As for MongoDB specifically, we didn't have any problems. I (Mehant) struggled a lot with SQL databases in my freshman and sophomore years.
I’m pleased that I found MongoDB; it saves a lot of stress and frustration. Everything is relatively easy. Besides that, the documents are quite flexible; it’s not as restricted as with SQL. We can take on many more challenges with MongoDB. I’ve learned a lot about the process. Converting ideas into actual implementation was the most important thing. One can have many ideas, but bringing them to life is essential. At the moment, the project merges the URLs. We are thinking of maybe adding a premium plan where people can get user-specific extensions. We use a counter variable to give those IDs to the shortened URL, but we would like to implement user-specific extensions. And we would like to add analytics. How many users are clicking on your shortened URL? Where is the traffic coming from? We are thrilled with the product as it is, but there are plenty of future ideas.
md
{ "tags": [ "Python", "Atlas", "Flask" ], "pageDescription": "Shorten multiple URLs instantly while overcoming the barriers of user registration.", "contentType": "Code Example" }
MergeURL - Python Example App
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/use-rongo-store-roblox-game-data-in-atlas
created
# Storing Roblox Game Data in MongoDB Atlas Using Rongo ## Introduction This article will walk you through setting up and using Rongo in your Roblox games, and storing your data in MongoDB Atlas. Rongo is a custom SDK that uses the MongoDB Atlas Data API. We’ll walk through the process of inserting Rongo into your game, setting Rongo up with MongoDB, and finally, using Rongo to store a player's data in MongoDB Atlas as an alternative to Roblox’s DataStores. Note that this library is open source and not supported by MongoDB. ## Prerequisites Before you can start using Rongo, there are a few things you’ll need to do. If you have not already, you’ll need to install Roblox Studio and create a new experience. Next, you’ll need to set up a MongoDB Atlas Account, which you can learn how to do using MongoDB’s Getting Started with Atlas article. Once you have set up your MongoDB Atlas cluster, follow the article on getting started with the MongoDB Data API. After you’ve done all of the above steps, you can get started with installing Rongo into your game! ## Installing Rongo into your game The installation of Rongo is fairly simple. However, you have a couple of ways to do it! ### Using Roblox’s Toolbox (recommended) First, head to the Rongo library page and press the **Get** button. After that, you can press the “Try in studio” button, which will open Roblox Studio and insert the module into Studio. If you wish to insert the module into a specific experience, then open Roblox Studio, load your experience, and navigate to the **View** tab. Click on **Toolbox**, navigate to the **Inventory** tab of the Toolbox, and locate Rongo. Or, search for it in the toolbox and then drag it into the viewport. ### Downloading the Rongo model You can download Rongo from our GitHub page by visiting our releases page and downloading the **Rongo.rbxm** file or the **Rongo.lua** file. After you have downloaded either of the files, open Roblox Studio and load your experience. Next, navigate to the **View** tab and open the **Explorer** window. You can then right click on **ServerScriptService** and press the **Insert from file** button. Once you’ve pressed the **Insert from file** button, locate the Rongo file and insert it into Roblox Studio. ## Setting up Rongo in your game First of all, you’ll need to ensure that the Rongo module is placed in **ServerScriptService**. Next, you must enable the **Allow HTTP Requests** setting in your game’s settings (Security tab). After you have done the above two steps, create a script in **ServerScriptService** and paste in the example code below.

```lua
local Rongo = require(game:GetService("ServerScriptService"):WaitForChild("Rongo"))

local Client = Rongo.new(YOUR_API_ID, YOUR_API_KEY)
local Cluster = Client:GetCluster("ExampleCluster")
local Database = Cluster:GetDatabase("ExampleDatabase")
local Collection = Database:GetCollection("ExampleCollection")
```

The above code will allow you to modify your collection by adding data, removing data, and fetching data. You’ll need to replace the arguments with the correct data to ensure it works correctly! Refer to our documentation for more information on the functions. To fetch data, you can use this example code:

```lua
local Result = Collection:FindOne({["Name"] = "Value"})
```

You can then print the result to the console using `print(Result)`. Once you’ve gotten the script set up, you’re all good to go and you can move on to the next section of this article!
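If you also want to verify writes, the same collection object exposes an `UpdateOne` function (the same call used in the player data section below, where the third argument enables upsert). Here is a quick sanity check with placeholder field names:

```lua
--// Quick sanity check with placeholder field names:
--// upsert a document, then read it back with FindOne
Collection:UpdateOne({["Name"] = "Value"}, {["Name"] = "Value", ["Score"] = 100}, true)

local Result = Collection:FindOne({["Name"] = "Value"})
print(Result)
```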
## Using Rongo to save player data This section will teach you how to save a player's data when they join and leave your game! We’ll be using the script we created in the previous section as a base for the new script. First of all, we’re going to create a function which will be fired whenever the player joins the game. This function will load the player's data if they’ve had their data saved before!

```lua
Players.PlayerAdded:Connect(function(Player: Player)
    --// Setup player leaderstats
    local Leaderstats = Instance.new("Folder")
    Leaderstats.Parent = Player
    Leaderstats.Name = "leaderstats"

    local Gold = Instance.new("IntValue")
    Gold.Parent = Leaderstats
    Gold.Name = "Gold"
    Gold.Value = 0

    --// Fetch data from MongoDB
    local success, data = pcall(function()
        return Collection:FindOne({["userId"] = Player.UserId})
    end)

    --// If an error occurs, warn in console
    if not success then
        warn("Failed to fetch player data from MongoDB")
        return
    end

    --// Check if data is valid
    if data and data["playerGold"] then
        --// Set player gold leaderstat value
        Gold.Value = data["playerGold"]
    end

    --// Give player +5 gold each time they join
    Gold.Value += 5
end)
```

The script above will first create a leaderstats folder and gold value when the player joins, which will appear in the player list. Next, it will fetch the player data and set the value of the player’s gold to the saved value in the collection. And finally, it will give the player an additional five gold each time they join. Next, we’ll make a function to save the player’s data whenever they leave the game.

```lua
Players.PlayerRemoving:Connect(function(Player: Player)
    --// Get player gold
    local Leaderstats = Player:WaitForChild("leaderstats")
    local Gold = Leaderstats:WaitForChild("Gold")

    --// Update player gold in database
    local success, data = pcall(function()
        return Collection:UpdateOne({["userId"] = Player.UserId}, {
            ["userId"] = Player.UserId,
            ["playerGold"] = Gold.Value
        }, true)
    end)
end)
```

This function will first fetch the player's gold in game and then update it in MongoDB with the upsert value set to true, so it will insert a new document in case the player has not had their data saved before. You can now test it in game and see the data updated in MongoDB once you leave! If you’d like a more vanilla Roblox DataStore experience, you can also use MongoStore, which is built on top of Rongo and has identical functions to Roblox’s DataStoreService.
Here is the full script used for this article:

```lua
local Rongo = require(game:GetService("ServerScriptService"):WaitForChild("Rongo"))

local Client = Rongo.new("MY_API_ID", "MY_API_KEY")
local Cluster = Client:GetCluster("Cluster0")
local Database = Cluster:GetDatabase("ExampleDatabase")
local Collection = Database:GetCollection("ExampleCollection")

local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(Player: Player)
    --// Setup player leaderstats
    local Leaderstats = Instance.new("Folder")
    Leaderstats.Parent = Player
    Leaderstats.Name = "leaderstats"

    local Gold = Instance.new("IntValue")
    Gold.Parent = Leaderstats
    Gold.Name = "Gold"
    Gold.Value = 0

    --// Fetch data from MongoDB
    local success, data = pcall(function()
        return Collection:FindOne({["userId"] = Player.UserId})
    end)

    --// If an error occurs, warn in console
    if not success then
        warn("Failed to fetch player data from MongoDB")
        return
    end

    --// Check if data is valid
    if data and data["playerGold"] then
        --// Set player gold leaderstat value
        Gold.Value = data["playerGold"]
    end

    --// Give player +5 gold each time they join
    Gold.Value += 5
end)

Players.PlayerRemoving:Connect(function(Player: Player)
    --// Get player gold
    local Leaderstats = Player:WaitForChild("leaderstats")
    local Gold = Leaderstats:WaitForChild("Gold")

    --// Update player gold in database
    local success, data = pcall(function()
        return Collection:UpdateOne({["userId"] = Player.UserId}, {
            ["userId"] = Player.UserId,
            ["playerGold"] = Gold.Value
        }, true)
    end)
end)
```

## Conclusion In summary, Rongo makes it seamless to store your Roblox game data in MongoDB Atlas. You can use it for whatever you need, from player data stores to fetching data from your website's database. Learn more about Rongo on the Roblox Developer Forum thread. View Rongo’s source code on our GitHub page.
md
{ "tags": [ "Atlas" ], "pageDescription": "This article will walk you through the process of using Rongo, a custom SDK built with the Atlas Data API to store data from Roblox Games to MongoDB Atlas. In this article, you’ll learn how to create a script to save data.", "contentType": "Article" }
Storing Roblox Game Data in MongoDB Atlas Using Rongo
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/javascript/chember-example-app
created
# Chember ## Creators Alper Kızıloğlu, Aytuğ Turanlıoğlu, Batu El, Begüm Ortaoğlu, Bora Demiral, Efecan Bahçıvanoğlu, Ege Çavuşoğlu and Ömer Ekin contributed this amazing project. ## About the project With Chember, you can find streetball communities near you. Create your streetball profile, discover the Chember map, see live court densities, and find or create games. We designed Chember for basketball lovers who want to connect with streetball communities around them. ## Inspiration I (Ege) started this project with a few friends from high school. I was studying abroad in Italy last semester, and Italy was one of the first countries that had to, you know, quarantine and all that when Covid-19 hit. In Italy, I had some friends from my high school basketball team, and they suggested going out and playing streetball with local folks. I liked to do that, but on the other hand, I also wanted to build complex software projects. I experimented with frameworks and MongoDB for school projects and wanted to learn more about how to turn this into a project. I told my friends about the idea of building a streetball app. The advantage was that we didn’t have to talk to our users because we were the users. I took on the technical responsibility for the project, and that’s how it started. Now we’re a student-formed startup that gained 10k users within three weeks after our launch, and we’re continuing to grow. :youtube[]{vid=UBEPpdAaKd4} ## Why MongoDB? I was already familiar with MongoDB. With the help of MongoDB, we were able to manage various types of data (like geolocation) and carry it out to our users performantly, without worrying about the infrastructure. We work very hard to bond more communities around the world through streetball, and we are sure that we will never have to worry about the storage and accessibility of our data. To give you an example: In our app, you can create streetball games, you can create teams, and these teams will be playing each other. Usually, we had games only for individual players, but now we would like to introduce a new feature called teams, which will also allow us to have tournament structures in the app. And the best part of MongoDB is that I can change the schema. We don’t worry about schemas; we add the fields we want to have, and then boom, we can have the team vs. team feature with the extra fields needed. ## How it works I first started to build our backend and to integrate it with MongoDB. This was my first experience doing such a complex project, and I had no help other than Google and tutorials. There are two significant projects that I should mention: Expo, a cross-platform mobile app development framework, and MongoDB, because it helped us start prototyping and building the backend very quickly. After four or five months, our team started to grow. More people from my high school team came onboard, so I began teaching the group: two learned front-end development and one learned back-end development. By the end of the summer, we were able to launch our app. My biggest concern was how the backend data was going to be handled when it launched because when we created our Instagram profile, we were getting a lot of hits. A lot of people seemed to be excited about our app. All the apps I had built before were school projects, so I never really had to worry about load balancing. And now I had to! We got around 10,000 users in the first weeks after the launch, and we only had a tiny marketing budget.
It’s pocket money from university students. We’ve been using our credits from the GitHub Student Developer Pack to maintain the MongoDB Atlas cluster. For our startup, and for most companies, data is the most important thing. We have a lot of data: user data and street data from courts worldwide. We have like 2500 courts right now. In Turkey, we have like 2300 of them. So we have quite a lot of useful data; we have very detailed information about each court. This data is vital for our company, and using Atlas, it was so easy to analyze the data, get backups, and integrate with the back end. MongoDB helped us a lot with building this project. ## Challenges and learnings COVID-19 was a challenge for us. Many people were hesitant about our app. Our app is about bringing people together for streetball. To be prepared for COVID-19, we added a new approach to the code, allowing people to practice social distancing while still being active. When you go to a park or a streetball court, you can check the app, and you can report the number of people playing at that moment. With this data, we can run schedulers every week to identify the court density, that is, how busy each court usually is. Before going to a court, you already know roughly how many people are going to be there. I also want to share our upcoming plans. Our main goal is to grow the community and help people find more peers to play streetball with. Creating communities is essential for us because, especially in Turkey and the United States where I've played streetball, there's a stereotype: if you're not a 20-year-old male over six feet or so, you don't fit into the streetball category. Because of this, a lot of people hesitate to go out and play streetball. We want to break this stereotype because many folks from other age groups and genders also play streetball! So we want to allow users to match their skills and physical attributes and implement unique features like women-only games. What we want to do in the first place is to break the stereotype and to build inclusive communities. We will be launching our tournament mode this summer; we're almost at the testing stage, but we're not sure when to launch it due to COVID-19, and the vaccinations are coming. So we'll see how it goes, because launching a tournament mode during COVID might not be the best idea. To keep people active during winter, we are planning to add private courts to our map. Our map is one of our most vital assets; you can find all the streetball courts around the world and get detailed information on the map. We're hoping to extend our data for private courts and allow people to book these courts and keep active during winter. I wanted to share this story. I'm sure many people want to build great things, and these tools are here for all of us to use. So I think this story could also inspire them to turn their ideas into reality.
md
{ "tags": [ "JavaScript" ], "pageDescription": "Instantly find the street ball game you are looking for anytime and anywhere with Chember!", "contentType": "Code Example" }
Chember
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/python/python-crud-mongodb
created
# Simple CRUD operations with Python and MongoDB For the absolute beginner, there's nothing simpler than a CRUD tutorial. Create, Read, Update, and Delete documents using this MongoDB tutorial for Python. ## Introduction To get started, first you'll need to understand that we use PyMongo, our Python driver, to connect your application to MongoDB. Once you've installed the driver, we'll build a simple CRUD (Create, Read, Update, Delete) application using FastAPI and MongoDB Atlas. The application will be able to create, read, update, and delete documents in a MongoDB database, exposing the functionality through a REST API. You can find the finished application on GitHub here. ## About the App You'll Build This is very basic example code for managing books using a REST API. The REST API has five endpoints:

* `GET /book`: to list all books
* `GET /book/<id>`: to get a book by its ID
* `POST /book`: to create a new book
* `PUT /book/<id>`: to update a book by its ID
* `DELETE /book/<id>`: to delete a book by its ID

To build the API, we'll use the FastAPI framework. It's a lightweight, modern, and easy-to-use framework for building APIs. It also generates Swagger API documentation that we'll put to use when testing the application. We'll be storing the books in a MongoDB Atlas cluster. MongoDB Atlas is MongoDB's database-as-a-service platform. It's cloud-based and you can create a free account and cluster in minutes, without installing anything on your machine. We'll use PyMongo to connect to the cluster and query data. This application uses Python 3.6.
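To give a feel for the shape of the code, here is a simplified sketch of what the list endpoint could look like with FastAPI and PyMongo. This is not the exact code from the linked repository; the `ATLAS_URI` environment variable and the `bookstore`/`books` names are just placeholders:

```python
# Simplified sketch of the GET /book endpoint (placeholder names, not the repo's exact code).
import os

from fastapi import FastAPI
from pymongo import MongoClient

app = FastAPI()
client = MongoClient(os.environ["ATLAS_URI"])
books = client["bookstore"]["books"]

@app.get("/book")
def list_books():
    # Stringify the ObjectId so every document is JSON-serializable
    return [{**doc, "_id": str(doc["_id"])} for doc in books.find()]
```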
md
{ "tags": [ "Python", "FastApi" ], "pageDescription": "Get started with Python and MongoDB easily with example code in Github", "contentType": "Code Example" }
Simple CRUD operations with Python and MongoDB
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/how-prisma-introspects-a-schema-from-a-mongodb-database
created
# How Prisma Introspects a Schema from a MongoDB Database Prisma ORM (object relational-mapper) recently released support for MongoDB. This represents the first time Prisma has supported a database outside of the SQL world. Prisma has been known for supporting many relational databases, but how did it end up being able to support the quite different MongoDB? > 🕴️ I work as the Engineering Manager of Prisma’s Schema team. We are responsible for the schema management parts of Prisma, which most prominently include our migrations and introspection tools, as well as the Prisma schema language and file (and our awesome Prisma VS Code extension!). > > The other big team working on Prisma object relational-mapper (ORM) is the Client team that builds Prisma Client and the Query Engine. These let users interact with their database to read and manipulate data. > > In this blog post, I summarize how our team got Prisma’s schema introspection feature to work for the new MongoDB connector and the interesting challenges we solved along the way. ## Prisma Prisma is a Node.js database ORM built around the Prisma schema language and the Prisma schema file containing an abstract representation of a user’s database. When you have tables with columns of certain data types in your database, those are represented as models with fields of a type in your Prisma schema. Prisma uses that information to generate a fully type-safe TypeScript/JavaScript Prisma Client that makes it easy to interact with the data in your database (meaning it will complain if you try to write a `String` into a `Datetime` field, and make sure you, for example, include information for all the non-nullable columns without a default and similar). Prisma Migrate uses your changes to the Prisma schema to automatically generate the SQL required to migrate your database to reflect that new schema. You don’t need to think about the changes necessary. You just write what you want to achieve, and Prisma then intelligently generates the SQL DDL (Data Definition Language) for that. For users who want to start using Prisma with their existing database, Prisma has a feature called Introspection. You call the CLI command `prisma db pull` to “pull in” the existing database schema, and Prisma then can create the Prisma schema for you automatically, so your existing database can be used with Prisma in seconds. This works the same for PostgreSQL, MySQL, MariaDB, SQL Server, CockroachDB, and even SQLite and relies on _relational databases_ being pretty similar, having tables and columns, understanding some dialect of SQL, having foreign keys, and concepts like referential integrity. ## Prisma + MongoDB One of our most requested features was support for Prisma with MongoDB. The feature request issue on GitHub for MongoDB support from January 2020 was for a long time by far the most popular one, having gained more than a total of 800 reactions. MongoDB is known for its flexible schema and the document model, where you can store JSON-like documents. MongoDB takes a different paradigm from relational databases when modeling data – there are no tables, no columns, schemas, or foreign keys to represent relations between tables. Data is often stored grouped in the same document with related data or “denormalized,” which is different from what you would see in a relational database. So, how could these very different worlds be brought together? ## Prisma and MongoDB: Schema team edition For our team, this meant figuring out: 1. 
How to represent a MongoDB structure and its documents in a Prisma schema. 2. How to migrate said data structures. 3. How to let people introspect their existing MongoDB database to easily be able to start using Prisma. Fortunately, solving 1 and 2 was relatively simple: 1. Where relational databases have tables, columns, and foreign keys that are mapped to Prisma’s models, with their fields and relations, MongoDB has equivalent collections, fields, and references that could be mapped the same way. Prisma Client can use that information to provide the same type safety and functionality on the Client side.

| Relational database | Prisma   | MongoDB      |
| ------------------- | -------- | ------------ |
| Table →             | Model    | ← Collection |
| Column →            | Field    | ← Field      |
| Foreign Key →       | Relation | ← Reference  |

With no database-side schema to migrate, creating and updating indexes and constraints was all that was needed for evolving a MongoDB database schema. As there is no SQL to modify the database structure (which is not written down or defined anywhere), Prisma also did not have to create migration files with Data Definition Language (DDL) statements and could just scope it down to allowing `prisma db push` to directly bring the database to the desired end state. A bigger challenge turned out to be the Introspection feature. ## Introspecting a schema with MongoDB With a relational database, there is always a way to inquire about its schema. In PostgreSQL, for example, you can query multiple views in an `information_schema` schema to figure out all the details about the structure of the database—to, for example, generate the DDL SQL required to recreate a database, or abstract it into a Prisma schema. Because MongoDB has a flexible schema (unless schemas are enforced through the schema validation feature), no such information store exists that could be easily queried. That, of course, poses a problem for how to implement introspection for MongoDB in Prisma. ## Research As any good engineering team would, we started by ... Googling a bit. No need to reinvent the wheel if someone else has already solved the problem. Searches for “MongoDB introspection,” “MongoDB schema reverse engineering,” and (as we learned the native term) “MongoDB infer schema” fortunately brought some interesting and worthwhile results. ### MongoDB Compass MongoDB’s own database GUI Compass has a “Schema” tab in a collection that can analyze a collection to “provide an overview of the data type and shape of the fields in a particular collection.” It works by sampling 1000 documents from a collection that has at least 1000 documents in it, analyzing the individual fields and then presenting them to the user. ### `mongodb-schema` Another resource we found was Lucas Hrabovsky’s `mongodb-infer` repository from 2014. Later that year, it seemed to have merged/been replaced by `mongodb-schema`, which is updated to this day. It’s a CLI and library version of the same idea—and indeed, when checking the source code of MongoDB Compass, you see a dependency for `mongodb-schema` that is used under the hood. ## Implementing introspection for MongoDB in Prisma Usually, finding an open source library with an Apache 2.0 license means you just saved the engineering team a lot of time, and the team can just become a user of the library. But in this case, we wanted to implement our introspection in the same introspection engine we also use for the SQL databases—and that is written in Rust. As there is no `mongodb-schema` for Rust yet, we had to implement this ourselves.
Knowing how `mongodb-schema` works, this turned out to be straightforward: We start by simply getting all collections in a database. The MongoDB Rust driver provides a handy `db.list_collection_names()` that we can call to get all collections—and each collection is turned into a model for the Prisma schema. 🥂 To fill in the fields with their type, we get a sample of up to 1000 random records from each collection and loop through them. For each entry, we note which fields exist, and what data type they have. We map the BSON type to our Prisma scalar types (and native type, if necessary). Optimally, all entries have the same fields with the same data type, which is easily and cleanly mappable—and we are done! Often, not all entries in a collection are that uniform. Missing fields, for example, are expected and equivalent to `NULL` values in a relational database. ### How to present fields with different types But different types (for example, `String` and `Datetime`) pose a problem: Which type should we put into the Prisma schema? > 🎓 **Learning 1: Just choosing the most common data type is not a good idea.** In an early iteration of MongoDB introspection, we defaulted to the most common type, and left a comment with the percentage of the occurrences in the Prisma schema. The idea was that this should work most of the time and give the developer the best development experience—the better the types in your Prisma schema, the more Prisma can help you. But we quickly figured out when testing this that there was a slight (but logical) problem: Any time the Prisma Client encounters a type that does _not_ match what it has been told via the Prisma schema, it has to throw an error and abort the query. Otherwise, it would return data that does not adhere to its own generated types for that data. While we were aware this would happen, it was not intuitive to us _how often_ that would cause the Prisma Client to fail. We quickly learned about this when using such a Prisma schema with conflicting types in the underlying database with Prisma Studio, the built-in database GUI that comes with Prisma CLI (just run `npx prisma studio`). By default, it loads 100 entries of a model you view—and when there were ~5% of entries with a different type in a database of 1000 entries, it was very common to hit that on the first page already. Prisma Studio (and also an app using these schemas) was essentially unusable for these data sets this way. Fortunately, _everything_ in MongoDB is a `Document`, which maps to a `Json` type field in Prisma. So, when a field has different data types, we use `Json` instead, output a warning in Prisma CLI, and put a comment above the field in the Prisma schema that we render, which includes information about the data types we found and how common they were.

*Output of Prisma CLI on conflicting data types*

*Resulting Prisma schema with statistics on conflicting data types*

### How to iterate on the data to get to a cleaner schema Using `Json` instead of a specific data type, of course, substantially lowers the benefit you get from Prisma and effectively enables you to write any JSON into the field (making the data even less uniform and harder to handle over time!). But at least you can read all existing data in Prisma Studio or in your app and interact with it. The preferred way to fix conflicting data types is to read and update them manually with a script, and then run `prisma db pull` again. The new Prisma schema should then show only the one type still present in the collection.
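As a hypothetical illustration (not an example from the article's own project): if a `registeredAt` field had been stored as a string in some documents and as a date in others, a short script run in `mongosh` could normalize it before introspecting again:

```javascript
// Hypothetical cleanup in mongosh: convert string values of "registeredAt" to Date
// objects so the field introspects to a single type on the next `prisma db pull`.
db.users.find({ registeredAt: { $type: "string" } }).forEach(doc => {
  db.users.updateOne(
    { _id: doc._id },
    { $set: { registeredAt: new Date(doc.registeredAt) } }
  );
});
```

After a cleanup like this, the field should come back from introspection as a single `DateTime` instead of `Json`.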
> 🎓 **Learning 2: Output Prisma types in Prisma schema, not MongoDB types.** Originally, we outputted the raw type information we got from the MongoDB Rust driver, the BSON types, into our CLI warnings and Prisma schema comments to help our users iterate on their data and fix the type. It turned out that while this was technically correct and told the user what type the data was in, using the BSON type names was confusing in a Prisma context. We switched to outputting the Prisma type names instead, and this now feels much more natural to users. While Prisma recommends cleaning up your data to minimize the number of conflicting types that fall back to `Json`, keeping those fields as `Json` is, of course, also a valid choice. ### How to enrich the introspected Prisma schema with relations By adding relation information to your introspected Prisma schema, you can tell Prisma to handle a specific column like a foreign key and create a relation with the data in it. `user User @relation(fields: [userId], references: [id])` creates a relation to the `User` model via the local `userId` field. So, if you are using MongoDB references to model relations, add `@relation` to them for Prisma to be able to access those in Prisma Client, emulate referential actions, and help with referential integrity to keep data clean. Right now, Prisma does not offer a way to detect or confirm the potential relations between different collections. We want to learn first how MongoDB users actually use relations, and then help them in the optimal way. ## Summary Implementing a good introspection story for MongoDB was a fun challenge for our team. In the beginning, it felt like two very different worlds were clashing together, but in the end, it was straightforward to find the correct tradeoffs and solutions to get the optimal outcome for our users. We are confident we found a great combination that brings together the best of MongoDB with what people want from Prisma. Try out Prisma and MongoDB with an existing MongoDB database, or start from scratch and create one along the way.
md
{ "tags": [ "MongoDB", "Rust" ], "pageDescription": "In this blog, you’ll learn about Prisma and how we interact with MongoDB, plus the next steps after having a schema.", "contentType": "Article" }
How Prisma Introspects a Schema from a MongoDB Database
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/integrate-mongodb-vercel-functions-serverless-experience
created
# Integrate MongoDB into Vercel Functions for the Serverless Experience Working with Functions as a Service (FaaS), often referred to as serverless, but you're stuck when it comes to trying to get your database working? Given the nature of these serverless functions, interacting with a database is a slightly different experience than if you were to create your own fully hosted back end. Why is it a different experience, though? Databases in general, not just MongoDB, can have a finite amount of concurrent connections. When you host your own web application, that web application is typically connecting to your database one time and for as long as that application is running, so is that same connection to the database. Functions offer a different experience, though. Instead of an always-available application, you are now working with an application that may or may not be available at request time to save resources. If you try to connect to your database in your function logic, you'll risk too many connections. If the function shuts down or hibernates or similar, you risk your database connection no longer being available. In this tutorial, we're going to see how to use the MongoDB Node.js driver with Vercel functions, something quite common when developing Next.js applications. ## The requirements There are a few requirements that should be met prior to getting started with this tutorial, depending on how far you want to take it. - You must have a MongoDB Atlas cluster deployed, free tier (M0) or better. - You should have a Vercel account if you want to deploy to production. - A recent version of Node.js and NPM must be available. In this tutorial, we're not going to deploy to production. Everything we plan to do can be tested locally, but if you want to deploy, you'll need a Vercel account and either the CLI installed and configured or your Git host. Both are out of the scope of this tutorial. While we'll get into the finer details of MongoDB Atlas later in this tutorial, you should already have a MongoDB Atlas account and a cluster deployed. If you need help with either, consider checking out this tutorial. The big thing you'll need is Node.js. We'll be using it for developing our Next.js application and testing it. ## Creating a new Next.js application with the CLI Creating a new Next.js project is easy when working with the CLI. From a command line, assuming Node.js is installed, execute the following command: ```bash npx create-next-app@latest ``` You'll be prompted for some information which will result in your project being created. At any point in this tutorial, you can execute `npm run dev` to build and serve your application locally. You'll be able to test your Vercel functions too! Before we move forward, let’s add the MongoDB Node.js driver dependency: ```bash yarn add mongodb ``` We won't explore it in this tutorial, but Vercel offers a starter template with the MongoDB Atlas integration already configured. If you'd like to learn more, check out the tutorial by Jesse Hall: How to Connect MongoDB Atlas to Vercel Using the New Integration. Instead, we'll look at doing things manually to get an idea of what's happening at each stage of the development cycle. ## Configuring a database cluster in MongoDB Atlas At this point, you should already have a MongoDB Atlas account with a project and cluster created. The free tier is fine for this tutorial. 
Rather than using our imagination to come up with a new set of data for this example, we're going to make use of one of the sample databases available to MongoDB users. From the MongoDB Atlas dashboard, click the ellipsis menu for one of your clusters and then choose to load the sample datasets. It may take a few minutes, so give it some time. For this tutorial, we're going to make use of the **sample_restaurants** database, but in reality, it doesn't really matter as the focus of this tutorial is around setup and configuration rather than the actual data. With the sample dataset loaded, go ahead and create a new user in the "Database Access" tab of the dashboard followed by adding your IP address to the "Network Access" rules. You'll need to do this in order to connect to MongoDB Atlas from your Next.js application. If you choose to deploy your application, you'll need to add a `0.0.0.0` rule as per the Vercel documentation. ## Connect to MongoDB and cache connections for a performance optimized experience Next.js is one of those technologies where there are a few ways to solve the problem. We could interact with MongoDB at build time, creating a 100% static generated website, but there are plenty of reasons why we might want to keep things adhoc in a serverless function. We could also use the Atlas Data API in the function, but you'll get a richer experience using the Node.js driver. Within your Next.js project, create a **.env.local** file with the following variables: ``` NEXT_ATLAS_URI=YOUR_ATLAS_URI_HERE NEXT_ATLAS_DATABASE=sample_restaurants NEXT_ATLAS_COLLECTION=restaurants ``` Remember, we're using the **sample_restaurants** database in this example, but you can be adventurous and use whatever you'd like. Don't forget to swap the connection information in the **.env.local** file with your own. Next, create a **lib/mongodb.js** file within your project. This is where we'll handle the actual connection steps. Populate the file with the following code: ```javascript import { MongoClient } from "mongodb"; const uri = process.env.NEXT_ATLAS_URI; const options = { useUnifiedTopology: true, useNewUrlParser: true, }; let mongoClient = null; let database = null; if (!process.env.NEXT_ATLAS_URI) { throw new Error('Please add your Mongo URI to .env.local') } export async function connectToDatabase() { try { if (mongoClient && database) { return { mongoClient, database }; } if (process.env.NODE_ENV === "development") { if (!global._mongoClient) { mongoClient = await (new MongoClient(uri, options)).connect(); global._mongoClient = mongoClient; } else { mongoClient = global._mongoClient; } } else { mongoClient = await (new MongoClient(uri, options)).connect(); } database = await mongoClient.db(process.env.NEXT_ATLAS_DATABASE); return { mongoClient, database }; } catch (e) { console.error(e); } } ``` It might not look like much, but quite a bit of important things are happening in the above file, specific to Next.js and serverless functions. 
Specifically, take a look at the `connectToDatabase` function: ```javascript export async function connectToDatabase() { try { if (mongoClient && database) { return { mongoClient, database }; } if (process.env.NODE_ENV === "development") { if (!global._mongoClient) { mongoClient = await (new MongoClient(uri, options)).connect(); global._mongoClient = mongoClient; } else { mongoClient = global._mongoClient; } } else { mongoClient = await (new MongoClient(uri, options)).connect(); } database = await mongoClient.db(process.env.NEXT_ATLAS_DATABASE); return { mongoClient, database }; } catch (e) { console.error(e); } } ``` The goal of the above function is to give us a client connection to work with as well as a database. However, the finer details suggest that we need to only establish a new connection if one doesn't exist and to not spam our database with connections if we're in development mode for Next.js. The local development server behaves differently than what you'd get in production, hence the need to check. Remember, connection quantities are finite, and we should only connect if we aren't already connected. So what we're doing in the function is we're first checking to see if that connection exists. If it does, return it and let whatever is calling the function use that connection. If the connection doesn't exist and we're in development mode, we check to see if we have a cached session and use that if we do. Otherwise, we need to create a connection and either cache it for development mode or production. If you understand anything from the above code, understand that we're just creating connections if connections don't already exist. ## Querying MongoDB from a Vercel function in the Next.js application We've done the difficult part already. We have a connection management system in place for MongoDB to be used throughout our Vercel application. The next part involves creating API endpoints, in a near identical way to Express Framework, and consuming them from within the Next.js front end. So what does this look like exactly? Within your project, create a **pages/api/list.js** file with the following JavaScript code: ```javascript import { connectToDatabase } from "../../lib/mongodb"; export default async function handler(request, response) { const { database } = await connectToDatabase(); const collection = database.collection(process.env.NEXT_ATLAS_COLLECTION); const results = await collection.find({}) .project({ "grades": 0, "borough": 0, "restaurant_id": 0 }) .limit(10).toArray(); response.status(200).json(results); } ``` Vercel functions exist within the **pages/api** directory. In this case, we're building a function with the goal of listing out data. Specifically, we're going to list out restaurant data. In our code above, we are leveraging the `connectToDatabase` function from our connection management code. When we execute the function, we're getting a connection without worrying whether we need to create one or reuse one. The underlying function code handles that for us. With a connection, we can find all documents within a collection. Not all the fields are important to us, so we're using a projection to exclude what we don't want. Rather than returning all documents from this large collection, we're limiting the results to just a few. The results get returned to whatever code or external client is requesting it. 
If we wanted to consume the endpoint from within the Next.js application, we might do something like the following in the **pages/index.js** file:

```react
import { useEffect, useState } from "react";
import Head from 'next/head'
import styles from '../styles/Home.module.css'

export default function Home() {
  const [restaurants, setRestaurants] = useState([]);

  useEffect(() => {
    (async () => {
      const results = await fetch("/api/list").then(response => response.json());
      setRestaurants(results);
    })();
  }, []);

  return (
    <div className={styles.container}>
      <Head>
        <title>Create Next App</title>
      </Head>
      <main className={styles.main}>
        <h1 className={styles.title}>MongoDB with Next.js! Example</h1>
        <ul>
          {restaurants.map(restaurant => (
            <li key={restaurant._id}>
              {restaurant.name} - {restaurant.address.street}
            </li>
          ))}
        </ul>
      </main>
    </div>
  )
}
```

Ignoring the boilerplate Next.js code, we added a `useState` and `useEffect` like the following:

```javascript
const [restaurants, setRestaurants] = useState([]);

useEffect(() => {
  (async () => {
    const results = await fetch("/api/list").then(response => response.json());
    setRestaurants(results);
  })();
}, []);
```

The above code will consume the API when the component loads. We can then render it in the following section:

```react
{restaurants.map(restaurant => (
  <li key={restaurant._id}>
    {restaurant.name} - {restaurant.address.street}
  </li>
))}
```

There isn't anything out of the ordinary happening when consuming or rendering the data. The heavy lifting happens in the serverless function itself and in our connection management file.

## Conclusion

You just saw how to use MongoDB Atlas with Vercel functions, a serverless solution that requires a different kind of approach. Remember, when dealing with serverless, the availability of your functions is up in the air. You don't want to spawn too many connections, and you don't want to attempt to use connections that don't exist. We resolved this by caching our connections and using the cached connection if available; otherwise, we spin up a new connection.

Got a question, or think you can improve upon this solution? Share it in the MongoDB Community Forums!
md
{ "tags": [ "JavaScript", "Next.js", "Node.js" ], "pageDescription": "Learn how to build a Next.js application that leverages Vercel functions and MongoDB Atlas to create a serverless development experience.", "contentType": "Tutorial" }
Integrate MongoDB into Vercel Functions for the Serverless Experience
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/swift/swift-change-streams
created
# Working with Change Streams from Your Swift Application My day job is to work with our biggest customers to help them get the best out of MongoDB when creating new applications or migrating existing ones. Their use cases often need side effects to be run whenever data changes — one of the most common requirements is to maintain an audit trail. When customers are using MongoDB Atlas, it's a no-brainer to recommend Atlas Triggers. With triggers, you provide the code, and Atlas makes sure that it runs whenever the data you care about changes. There's no need to stand up an app server, and triggers are very efficient as they run alongside your data. Unfortunately, there are still some workloads that customers aren't ready to move to the public cloud. For these applications, I recommend change streams. Change streams are the underlying mechanism used by Triggers and many other MongoDB technologies — Kafka Connector, Charts, Spark Connector, Atlas Search, anything that needs to react to data changes. Using change streams is surprisingly easy. Ask the MongoDB Driver to open a change stream and it returns a database cursor. Listen to that cursor, and your application receives an event for every change in your collection. This post shows you how to use change streams from a Swift application. The principles are exactly the same for other languages. You can find a lot more on change streams at Developer Center. ## Running the example code I recently started using the MongoDB Swift Driver for the first time. I decided to build a super-simple Mac desktop app that lets you browse your collections (which MongoDB Compass does a **much** better job of) and displays change stream events in real time (which Compass doesn't currently do). You can download the code from the Swift-Change-Streams repo. Just build and run from Xcode. Provide your connection-string and then browse your collections. Select the "Enable change streams" option to display change events in real time. ### The code You can find this code in CollectionView.swift. We need a variable to store the change stream (a database cursor) ```swift @State private var changeStream: ChangeStream>? ``` as well as one to store the latest change event received from the change stream (this will be used to update the UI): ```swift @State private var latestChangeEvent: ChangeStreamEvent? ``` The `registerChangeStream` function is called whenever the user checks or unchecks the change stream option: ```swift func registerChangeStream() async { // If the view already has an active change stream, close it down if let changeStream = changeStream { _ = changeStream.kill() self.changeStream = nil } if enabledChangeStreams { do { let changeStreamOptions = ChangeStreamOptions(fullDocument: .updateLookup) changeStream = try await collection?.watch(options: changeStreamOptions) _ = changeStream?.forEach({ changeEvent in withAnimation { latestChangeEvent = changeEvent showingChangeEvent = true Task { await loadDocs() } } }) } catch { errorMessage = "Failed to register change stream: \(error.localizedDescription)" } } } ``` The function specifies what data it wants to see by creating a `ChangeStreamOptions` structure — you can see the available options in the Swift driver docs. In this app, I specify that I want to receive the complete new document (in addition to the deltas) when a document is updated. Note that the full document is always included for insert and replace operations. The code then calls `watch` on the current collection. 
Note that you can also provide an aggregation pipeline as a parameter named `pipeline` when calling `watch`. That pipeline can filter and reshape the events your application receives. Once the asynchronous watch function completes, the `forEach` loop processes each change event as it's received. When the loop updates our `latestChangeEvent` variable, the change is automatically propagated to the `ChangeEventView`: ```swift ChangeEventView(event: latestChangeEvent) ``` You can see all of the code to display the change event in `ChangeEventView.swift`. I'll show some highlights here. The view receives the change event from the enclosing view (`CollectionView`): ```swift let event: ChangeStreamEvent ``` The code looks at the `operationType` value in the event to determine what color to use for the window: ```swift var color: Color { switch event.operationType { case .insert: return .green case .update: return .yellow case .replace: return .orange case .delete: return .red default: return .teal } } ``` `documentKey` contains the `_id` value for the document that was changed in the MongoDB collection: ```swift if let documentKey = event.documentKey { ... Text(documentKey.toPrettyJSONString()) ... } } ``` If the database operation was an update, then the delta is stored in `updateDescription`: ```swift if let updateDescription = event.updateDescription { ... Text(updateDescription.updatedFields.toPrettyJSONString()) ... } } ``` The complete document after the change was applied in MongoDB is stored in `fullDocument`: ```swift if let fullDocument = event.fullDocument { ... Text(fullDocument.toPrettyJSONString()) ... } } ``` If the processing of the change events is a critical process, then you need to handle events such as your process crashing. The `_id.resumeToken` in the `ChangeStreamEvent` is a token that can be used when starting the process to continue from where you left off. Simply provide this token to the `resumeAfter` or `startAfter` options when opening the change stream. Note that this assumes that the events you've missed haven't rotated out of the Oplog. ### Conclusion Use Change Streams (or Atlas triggers, if you're able) to simplify your code base by decoupling the handling of side-effects from each place in your code that changes data. After reading this post, you've hopefully realized just how simple it is to create applications that react to data changes using MongoDB Change Streams. Questions? Comments? Head over to our Developer Community to continue the conversation!
md
{ "tags": [ "Swift", "MongoDB" ], "pageDescription": "Change streams let you run your own logic when data changes in your MongoDB collections. This post shows how to consume MongoDB change stream events from your Swift app.", "contentType": "Quickstart" }
Working with Change Streams from Your Swift Application
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/realm-javascript-v11-react-native
created
# Realm JavaScript v11: A Step Forward for React Native — Hermes Support, Realm React, Flipper, and Much More After over a year of effort by the Realm JavaScript team, today, we are pleased to announce the release of Realm JavaScript version 11 — a complete re-imagining of the SDK and its APIs to be more idiomatic for React Native and JavaScript developers everywhere. With this release, we have built underlying support for the new Hermes JS engine, now becoming the standard for React Native applications everywhere. We have also introduced a new library for React developers, making integration with components, hooks, and context a breeze. We have built a Flipper plugin that makes inspecting, querying, and modifying a Realm within a React Native app incredibly fast. And finally, we have transitioned to a class-based schema definition to make creating your data model as intuitive as defining classes. Realm is a simple, fast, object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer and is built from the ground up to work cross-platform, making React Native a natural fit. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to automatically refresh the view via React components, hooks, and context. Finally, Realm JavaScript comes with built-in synchronization to MongoDB Atlas — a cloud-managed database-as-a-service for MongoDB. The developer does not need to write any networking or conflict resolution code. All data transfer is done under the hood, abstracting thousands of lines of code away from the developer, and enabling them to build reactive mobile apps that can trigger UI updates automatically from server-side state changes. This delivers a performant and offline-tolerant mobile app because it always renders the state from disk. > **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free! ## Introduction  React Native has been a windfall for JavaScript developers everywhere by enabling them to write one code-base and deploy to multiple mobile targets — mainly iOS and Android — saving time and cost associated with maintaining multiple code bases. The React Native project has moved aggressively in the past years to solve mobile centric problems such as introducing a new JavaScript engine, Hermes, to solve the cold start problem and increase performance across the board. It has also introduced Fabric and TurboModules, which are projects designed to aid embedded libraries, such as Realm, which at its core is written in C++, to link into the JavaScript context. We believe these new developments from React Native are a great step forward for mobile developers and we have worked closely with the team to align our library to these new developments.  ## What is Realm? The Realm JavaScript SDK is built on three core concepts: * An object database that infers the schema from the developers’ class structure, making working with objects as easy as interacting with objects in code. No conversion code or ORM necessary. 
* Live objects, where the object reference updates as soon as the state changes and the UI refreshes — all built on top of Realm’s React library — enabling easy-to-use context, hooks, and components. * A columnar store where query results return immediately and integrate with an idiomatic query language that developers are familiar with. Realm is a fast, easy-to-use alternative to SQLite, that comes with a real-time edge to cloud sync solution out of the box. Written from the ground up in C++, it's not a wrapper around SQLite or any other relational data store and is designed with the mobile environment in mind. It's lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping. with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages from disk into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. Realm makes it easy to store, query, and sync your mobile data across a plethora of devices and the back end. ## Realm for Javascript developers When Realm JavaScript was first implemented back in 2016, the only JavaScript engine available in React Native was JavaScript Core, which did not expose a way for embedded libraries such as Realm to integrate with. Since then, React Native has expanded their API to give developers the tools they need to work with third-party libraries directly in their mobile application code — most notably, the new Hermes JavaScript engine for React Native apps. After almost a year of effort, Realm JavaScript now runs through the JavaScript Interface (JSI), allowing us to support JavaScriptCore and, most importantly, Hermes — facilitating an exponentially faster app boot time and an intuitive debugging experience with Flipper. The Realm React library eliminates an incredible amount of boilerplate code that a developer would normally write in order to funnel data from the state store to the UI. With this library, Realm is directly integrated with React and comes with built-in hooks for accessing the query, write, and sync APIs. Previously, React Native developers would need to write this boilerplate themselves. By leveraging the new APIs from our React library, developers can save time and reduce bugs by leaving the Realm centric code to us. We have also added the ability for Realm’s objects to be React component aware, ensuring that re-renders are as efficient as possible in the component tree and freeing the developer from needing to write their own notification code. Lastly, we have harmonized Realm query results and lists with React Native ListView components, ensuring that individual items in lists re-render when they change — enabling a slick user experience. At its core, Realm has always endeavored to make working with the data layer as easy as working with language native objects, which is why your local database schema is inferred from your object definitions. In Realm JavaScript v11, we have now extended our existing functionality to fully support class based objects in JavaScript, aligning with users’ expectations of being able to call a constructor of a class-based model when wanting to create or add a new object. 
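As a rough illustration of what class-based models look like in plain JavaScript (a minimal sketch with invented names and fields, not an excerpt from the SDK docs):

```javascript
const Realm = require("realm");

// A class-based model: the schema lives alongside the class itself
class Task extends Realm.Object {
  static schema = {
    name: "Task",
    primaryKey: "_id",
    properties: {
      _id: "objectId",
      description: "string",
      isComplete: { type: "bool", default: false },
    },
  };
}

async function run() {
  const realm = await Realm.open({ schema: [Task] });
  realm.write(() => {
    // Creating an object is just calling the constructor inside a write transaction
    new Task(realm, { _id: new Realm.BSON.ObjectId(), description: "Ship v11" });
  });
  realm.close();
}

run();
```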
On top of this, we have done this not only in JavaScript but also with Typescript models, allowing developers to declare their types directly in the class definition, cutting out a massive amount of boilerplate code that a developer would need to write and removing a source of bugs while delivering type safety.  ``` /////////////////////////////////////////////////// // File: task.ts /////////////////////////////////////////////////// // Properties: // - _id: primary key, create a new value (objectId) when constructing a `Task` object // - description: a non-optional string // - isComplete: boolean, default value is false; the properties is indexed in the database to speed-up queries export class Task extends Realm.Object { _id = new Realm.BSON.ObjectId(); description!: string; @index isComplete = false; createdAt!: Date = () => new Date(); userId!: string; static primaryKey = "_id"; constructor(realm, description: string, userId: string) { super(realm, { description, userId }); } } export const TaskRealmContext = createRealmContext({ schema: [Task], }); /////////////////////////////////////////////////// // File: index.ts /////////////////////////////////////////////////// const App = () => " />; AppRegistry.registerComponent(appName, () => App); /////////////////////////////////////////////////// // File: appwrapper.tsx /////////////////////////////////////////////////// export const AppWrapper: React.FC<{ appId: string; }> = ({appId}) => { const {RealmProvider} = TaskRealmContext; return ( ); }; /////////////////////////////////////////////////// // File: app.tsx /////////////////////////////////////////////////// function TaskApp() { const app = useApp(); const realm = useRealm(); const [newDescription, setNewDescription] = useState("") const results = useQuery(Task); const tasks = useMemo(() => result.sorted('createdAt'), [result]); useEffect(() => { realm.subscriptions.update(mutableSubs => { mutableSubs.add(realm.objects(Task)); }); }, [realm, result]); return ( { realm.write(() => { new Task(realm, newDescription, app.currentUser.id); }); setNewDescription("") }}>➕ item._id.toHexString()} renderItem={({ item }) => { return ( realm.write(() => { item.isComplete = !item.isComplete }) }>{item.isComplete ? "✅" : "☑️"} {item.description} { realm.write(() => { realm.delete(item) }) }} >{"🗑️"} ); }} > ); } ``` ## Looking ahead The Realm JavaScript SDK is free, open source, and available for you to try out today. It can be used as an open-source local-only database for mobile apps or can be used to synchronize data to MongoDB Atlas with a generous free tier. The Realm JavaScript team is not done. As we look to the coming year, we will continue to refine our APIs to eliminate more boilerplate and do the heavy lifting for our users especially as it pertains to React 18, hook easily into developer tools like Expo, and explore expanding into other platforms such as web or Electron. Give it a try today and let us know what you think! Try out our tutorial, read our docs, and follow our repo.
md
{ "tags": [ "Realm", "JavaScript", "React Native" ], "pageDescription": "Today, we are pleased to announce the release of Realm JavaScript version 11— a complete re-imagining of the SDK and its APIs to be more idiomatic for React Native and JavaScript developers everywhere.", "contentType": "Article" }
Realm JavaScript v11: A Step Forward for React Native — Hermes Support, Realm React, Flipper, and Much More
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/atlas-multi-cloud-global-clusters
created
# Atlas Multi-Cloud Global Cluster: Always Available, Even in the Apocalypse! ## Introduction In recent years, "high availability" has been a buzzword in IT. Using this phrase usually means having your application and services resilient to any disruptions as much as possible. As vendors, we have to guarantee certain levels of uptime via SLA contracts, as maintaining high availability is crucial to our customers. These days, downtime, even for a short period of time, is widely unacceptable. MongoDB Atlas, our data as a service platform, has just the right solution for you! > > >If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post. > > ## Let's Go Global MongoDB Atlas offers a very neat and flexible way to deploy a global database in the form of a Global Sharded Cluster. Essentially, you can create Zones across the world where each one will have a shard, essentially a replica set. This allows you to read and write data belonging to each region from its local shard/s. To improve our network stability and overhead, Atlas provides a "Local reads in all Zones" button. It directs Atlas to automatically associate at least one secondary from each shard to one of the other regions. With an appropriate read preference, our application will now be able to get data from all regions without the need to query it cross-region. See our Atlas Replica Set Tags to better understand how to target local nodes or specific cloud nodes. MongoDB 4.4 introduced another interesting feature around read preferences for sharded clusters, called Hedged Reads. A hedged read query is run across two secondaries for each shard and returns the fastest response. This can allow us to get a fast response even if it is served from a different cloud member. Since this feature is allowed for `non-Primary` read preferences (like `nearest`), it should be considered to be eventually consistent. This should be taken into account with your consistency considerations. ## Let's Go Multi-Cloud One of the latest breakthroughs the Atlas service presented is being able to run a deployment across cloud vendors (AWS, Azure, and GCP). This feature is now available also in Global Clusters configurations. We are now able to have shards spanning multiple clouds and regions, in one cluster, with one unified connection string. Due to the smart tagging of the replica set and hosts, we can have services working isolated within a single cloud, or benefit from being cloud agnostic. To learn more about Multi-Cloud clusters, I suggest you read a great blog post, Create a Multi-Cloud Cluster with MongoDB Atlas, written by my colleague, Adrienne Tacke. ## What is the Insurance Policy We've Got? When you set up a Global Cluster, how it is configured will change the availability features. As you configure your cluster, you can immediately see how your configuration covers your resiliency, HA, and Performance requirements. It's an awesome feature! Let's dive into the full set: ##### Zone Configuration Check-list | Ability | Description | Feature that covers it | | --- | --- | --- | | Low latency read and writes in \ | Having a Primary in each region allows us to query/write data within the region. | Defining a zone in a region covers this ability. 
| | Local reads in all zones | If we want to query a local node for another zone data (e.g., in America, query for Europe documents), we need to allow each other zone to place at least one secondary in the local region (e.g., Europe shard will have one secondary in America region). This requires our reads to use a latency based `readPreference` such as `nearest` or `hedged`. If we do not have a local node we will need to fetch the data remotely. | Pressing the "Allow local reads in all zones" will place one secondary in each other zone. | | Available during partial region outage | In case there is a cloud "availability zone" outage within a specific region, regions with more than one availability zone will allow the region to still function as normal. | Having the preferred region of the zone with a number of electable nodes span across two or more availability zones of the cloud provider to withstand an availability zone outage. Those regions will be marked with a star in the UI. For example: two nodes in AWS N. Virginia where each one is, by design, deployed over three potential availability zones. | | Available during full region outage | In case there is a full cloud region outage, we need to have a majority of nodes outside this region to maintain a primary within the | Having a majority of "Electable" nodes outside of the zone region. For example: two nodes in N. Virginia, two nodes in N. California, and one node in Ireland | | Available during full cloud provider outage | If a whole cloud provider is unavailable, the zones still have a majority of electable nodes on other cloud providers, and so the zones are not dependent on one cloud provider. | Having multi-cloud nodes in all three clouds will allow you to withstand one full cloud provider failure. For example: two nodes on AWS N.Virginia, two nodes on GCP Frankfurt, and one node on Azure London. | ## Could the Apocalypse Not Scare Our Application? After we have deployed our cluster, we now have a fully global cross-region, cross-cloud, fault-tolerant cluster with low read and write latencies across the globe. All this is accessed via a simple unified SRV connection string: ``` javascript "mongodb+srv://user:[email protected]/test?w=majority" ``` This cluster comes with a full backup and point in time restore option, in case something **really** horrible happens (like human mistakes...). I don't think that our application has anything to fear, other than its own bugs. To show how easy it is to manipulate this complex deployment, I YouTubed it: > > >:youtube]{vid=pbhWjNVKMfg} > >To learn more about how to deploy a cross region global cluster to cover all of our fault tollerence best practices, check out the video. > > ## Wrap-Up Covering our global application demand and scale has never been easier, while keeping the highest possible availability and resiliency. Global multi-cloud clusters allow IT to sleep well at night knowing that their data is always available, even in the apocalypse! > > >If you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. > >
md
{ "tags": [ "Atlas" ], "pageDescription": "Learn how to build Atlas Multi-Cloud Global Cluster: Always available, even in the apocalypse!", "contentType": "Article" }
Atlas Multi-Cloud Global Cluster: Always Available, Even in the Apocalypse!
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/announcing-realm-flutter-sdk
created
# Announcing the GA of the Realm Flutter SDK After over a year since our first official release, we are excited to announce the general availability of our Realm Flutter SDK. The team has made dozens of releases, merged hundreds of PRs, and squashed thousands of bugs — many of them raised by you, our community, that has guided us through this preview period. We could not have done it without your feedback and testing. The team also worked in close partnership with the Dart and Google Cloud teams to make sure our Flutter SDK followed best practices. Their guidance was essential to stabilizing our low-level API for 1.0. You can read more about our collaboration with the Dart team on their blog here. Realm is a simple and fast object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to automatically refresh the view via StatefulWidgets and Streams. With this 1.0 release we have solidified the foundation of our Realm Flutter SDK and stabilized the API in addition to adding features around schema definitions such as support for migrations and new types like lists of primitives, embedded objects, sets, and a RealmValue type, which can contain a mix of any valid type. We’ve also enhanced the API to support asynchronous writes and frozen objects as well as introducing a writeCopy API for converting realm files in code bringing it up to par with our other SDKs. Finally, the Realm Flutter SDK comes with built-in data synchronization to MongoDB Atlas — a cloud-managed database-as-a-service for MongoDB. The developer does not need to write any networking or conflict resolution code. All data transfer is done under the hood, abstracting away thousands of lines of code for handling offline state and network availability, and enabling developers to build reactive mobile apps that can trigger UI updates automatically from server-side state changes. This delivers a performant and offline-tolerant mobile app because it always renders the state from disk. > **Live-code with us** > > Join us live to build a Flutter mobile app from scratch! Senior Software Engineer Kasper Nielsen walks us through setting up a new Flutter app with local storage via Realm and cloud-syncing via Atlas Device Sync. Register here. ## Why Realm? All of Realm’s SDKs are built on three core concepts: * An object database that infers the schema from the developers’ class structure — making working with objects as easy as interacting with their data layer. No conversion code necessary. * Live objects so the developer has a simple way to update their UI — integrated with StatefulWidgets and Streams. * A columnar store so that query results return in lightning speed and directly integrate with an idiomatic query language the developer prefers. Realm is a database designed for mobile applications as a replacement for SQLite. It was written from the ground up in C++, so it is not a wrapper around SQLite or any other relational datastore. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. 
Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of pages of data into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. ***Build better mobile apps with Atlas Device Sync:*** *Atlas Device Sync is a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!* ## Enhancements to the Realm Flutter SDK The greatest enhancements for the GA of the SDK surround the modeling of the schema — giving developers enhanced expressiveness and flexibility when it comes to building your data classes for your application. First, the SDK includes the ability to have embedded objects — this allows you to declare an object as owned by a parent object and attach its lifecycle to the parent. This enables a cascading delete when deleting a parent object because it will also delete any embedded objects. It also frees the developer from writing cleanup code to perform this operation manually. A Set has also been added, which enables a developer to have a collection of any type of elements where uniqueness is automatically enforced along with a variety of methods that can operate on a set. Finally, the flexibility is further enhanced with the addition of RealmValue, which is a mixed type that allows a developer to insert any valid type into the collection or field. This is useful when the developer may not know the type of a value that they are receiving from an API but needs to store it for later manipulation. The new SDK also contains ergonomic improvements to the API to make manipulating the data and integrating into the Flutter ecosystem seamless. The writeCopy API allows you to make a copy of a Realm database file to bundle with your application install and enables you to convert from a non-sync to sync with Realm and vice versa. Frozen objects give developers the option to make a snapshot of the state at a certain point in time, making it simple to integrate into a unidirectional data flow pattern or library such as BLoC. Lastly, the writeAsync API introduces a convenient method to offload work to the background and preserve execution of the main thread. ``` // Define your schema in your object model - here a 1 to 1 relationship @RealmModel() class _Car { late String make; String? model; int? kilometers = 500; _Person? owner; } // Another object in the schema. 
A person can have a Set of cars and data can be of any type @RealmModel() class _Person { late String name; int age = 1; late Set<_Car> cars; late RealmValue data; } void main(List arguments) async { final config = Configuration.local(Car.schema, Person.schema]); final realm = Realm(config); // Create some cars and add them to your person object final person = Person("myself", age: 18); person.cars.add(Car("Tesla", model: "Model Y", kilometers: 818)); person.cars.add(Car("Audi", model: "A4", kilometers: 12)); person.data = RealmValue.bool(true); // Dispatch the write to the background to not block the UI await realm.writeAsync(() { realm.add(person); }); // Listen for any changes to the underlying data - useful for updating the UI person.cars.changes.listen((e) { print("set of cars changed"); }); // Add some more any type value to the data field realm.write(() { person.data = RealmValue.string("Realm is awesome"); }); realm.close(); } ``` ## Looking ahead The Realm Flutter SDK is free, [open source, and available for you today. We believe that with the GA of the SDK, Flutter developers can easily build an application that seamlessly integrates into Dart’s language primitives and have the confidence to launch their app to production. Thousands of developers have already tried the SDK and many have already shipped their app to the public. Two such companies are Aupair Valley and Dot On. Aupair Valley is a mobile social media platform that helps connect families and au pairs worldwide. The app’s advanced search algorithm facilitates the matching between families and au pairs. They can find and connect with each other and view information about their background. The app also enables chat functionality to set up a meeting. Aupair Valley selected Flutter so that they could easily iterate on both an Android and iOS app in the same codebase, while Realm and Device Sync played an essential role in the development of Aupair Valley by abstracting away the data layer and networking that any two-sided market app requires. Built on Google Cloud and MongoDB Atlas, Aupair Valley drastically reduced development time and costs by leveraging built-in functionality on these platforms. Dot On is a pioneering SaaS Composable Commerce Platform for small and midsize enterprises, spanning Product Management and Order Workflow Automation applications. Native connectors with Brightpearl by Sage and Shopify Plus further enrich capabilities around global data syndication and process automation through purpose-built, deep integrations. With Dot On’s visionary platform, brands are empowered with digital freedom to deliver exceptional and unique customer experiences that exceed expectations in this accelerating digital world. Dot On chose Realm and MongoDB Atlas for their exceptional and innovative technology fused with customer success that is central to Dot On’s core values. To meet this high bar, it was essential to select a vendor whose native-application database solution was tried and tested, highly scalable, and housed great flexibility around data architecture all while maintaining very high standards over security, privacy and compliance. “Realm and Device Sync is a perfect fit and has accelerated our development. Dot On’s future is incredibly exciting and we look forward to our continued relationship with MongoDB who have been highly supportive from day one.” -Jon Petrie, CEO, Dot On. The future is bright for Flutter at Realm and MongoDB. 
Our roadmap will continue to evolve by adding new types such as Decimal128 and Maps, along with additional MongoDB data access APIs, and deepening our integration into the Flutter framework with guidance and convenience APIs for even simpler integrations into state management and streams. Stay tuned! Give it a try today and let us know what you think! Check out our samples, read our docs, and follow our repo. > **Live-code with us** > >Join us live to build a Flutter mobile app from scratch! Senior Software Engineer Kasper Nielsen walks us through setting up a new Flutter app with local storage via Realm and cloud-syncing via Atlas Device Sync. Register here.
md
{ "tags": [ "Realm", "Flutter" ], "pageDescription": "After over a year since our first official release, we are excited to announce the general availability of our Realm Flutter SDK.", "contentType": "Article" }
Announcing the GA of the Realm Flutter SDK
2024-05-20T17:32:23.500Z
devcenter
https://www.mongodb.com/developer/products/atlas/easy-deployment-mean-stack
created
# Easy Deployment of MEAN Stack with MongoDB Atlas, Cloud Run, and HashiCorp Terraform *This article was originally written by Aja Hammerly and Abirami Sukumaran, developer advocates from Google.* Serverless computing promises the ability to spend less time on infrastructure and more time on developing your application. But historically, if you used serverless offerings from different vendors you didn't see this benefit. Instead, you often spent a significant amount of time configuring the different products and ensuring they can communicate with each other. We want to make this easier for everyone. We've started by using HashiCorp Terraform to make it easier to provision resources to run the MEAN stack on Cloud Run with MongoDB Atlas as your database. If you want to try it out, our GitHub repository is here: https://github.com/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb ## MEAN Stack Basics If you aren't familiar, the MEAN stack is a technology stack for building web applications. The MEAN stack is composed of four main components—MongoDB, Express, Angular, and Node.js. * MongoDB is responsible for data storage * Express.js is a Node.js web application framework for building APIs * Angular is a client-side JavaScript platform * Node.js is a server-side JavaScript runtime environment. The server uses the MongoDB Node.js driver to connect to the database and retrieve and store data Our project runs the MEAN stack on Cloud Run (Express, Node) and MongoDB Atlas (MongoDB). The repository uses a sample application to make it easy to understand all the pieces. In the sample used in this experiment, we have a client and server application packaged in individual containers each that use the MongoDB-Node.js driver to connect to the MongoDB Atlas database. Below we'll talk about how we used Terraform to make deploying and configuring this stack easier for developers and how you can try it yourself. ## Required One-Time Setup To use these scripts, you'll need to have both MongoDB Atlas and Google Cloud accounts. ### MongoDB Atlas Setup 1. Login with your MongoDB Atlas Account. 2. Once you're logged in, click on "Access Manager" at the top and select "Organization Access" 3. Select the "API Keys" tab and click the "Create API Key" button 4. Give your new key a short description and select the "Organization Owner" permission 5. Click "Next" and then make a note of your public and private keys 6. Next, you'll need your Organization ID. In the left navigation menu, click “Settings”.  7. Locate your Organization ID and copy it. That's everything for Atlas. Now you're ready to move on to setting up Google Cloud! ### Google Cloud Tooling and Setup You'll need a billing account setup on your Google Cloud account and to make note of your Billing Account ID. You can find your Billing Account ID on the billing page. You'll also need to pick a region for your infrastructure. Note that Google Cloud and Atlas use different names for the same region. You can find a mapping between Atlas regions and Google Cloud regions here. You'll need a region that supports the M0 cluster tier. Choose a region close to you and make a note of both the Google Cloud and Atlas region names. Finally, you'll need a terminal with the Google Cloud CLI (gcloud) and Terraform installed. You can use your workstation or try Cloud Shell, which has these tools already installed. To get started in Cloud Shell with the repo cloned and ready to configure, click here. ### Configuring the Demo If you haven't already, clone this repo. 
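If you're working from your own terminal rather than the pre-configured Cloud Shell link above, cloning the repository looks something like this (assuming you have git installed):

```shell
git clone https://github.com/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb.git
cd terraform-mean-cloudrun-mongodb
```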
Run `terraform init` to make sure Terraform is working correctly and download the provider plugins. Then, create a file in the root of the repository called `terraform.tfvars` with the following contents, replacing placeholders as necessary: *atlas\_pub\_key          = "\"* *atlas\_priv\_key         = "\"* *atlas\_org\_id           = "\"* *google\_billing\_account = "\"* If you selected the *us-central1/US\_CENTRAL* region then you're ready to go. If you selected a different region, add the following to your `terraform.tfvars ` file: atlas\_cluster\_region = "\" google\_cloud\_region  = "\" Run terraform init again to make sure there are no new errors. If you get an error, check your terraform.tfvars file. ### Deploy the Demo You're ready to deploy! You have two options: you can run `terraform plan` to see a full listing of everything that Terraform wants to do without any risk of accidentally creating those resources. If everything looks good, you can then run `terraform apply` to execute the plan. Alternately, you can just run terraform apply on its own and it will create a plan and display it before prompting you to continue. You can learn more about the plan and apply commands in this tutorial. For this demo, we're going to just run `terraform apply`: If everything looks good to you, type yes and press enter. This will take a few minutes. When it's done, Terraform will display the URL of your application: Open that URL in your browser and you'll see the sample app running. ### Cleaning Up When you're done, run terraform destroy to clean everything up: If you're sure you want to tear everything down, type yes and press enter. This will take a few minutes. When Terraform is done everything it created will have been destroyed and you will not be billed for any further usage. ## Next Steps You can use the code in this repository to deploy your own applications. Out of the box, it will work with any application that runs in a single container and reads the MongoDB connection string from an environment variable called ATLAS\_URI, but the Terraform code can easily be modified if you have different needs or to support more complex applications. For more information please refer to the Next Steps section of the readme.
md
{ "tags": [ "Atlas", "Node.js", "Google Cloud", "Terraform" ], "pageDescription": "Learn about using HashiCorp Terraform to make it easier to provision resources to run the MEAN stack on Cloud Run with MongoDB Atlas as your database. ", "contentType": "Article" }
Easy Deployment of MEAN Stack with MongoDB Atlas, Cloud Run, and HashiCorp Terraform
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/kafka-mongodb-atlas-tutorial
created
# Kafka to MongoDB Atlas End to End Tutorial Data and event-driven applications are in high demand in a large variety of industries. With this demand, there is a growing challenge with how to sync the data across different data sources. A widely adopted solution for communicating real-time data transfer across multiple components in organization systems is implemented via clustered queues. One of the popular and proven solutions is Apache Kafka. The Kafka cluster is designed for streams of data that sequentially write events into commit logs, allowing real-time data movement between your services. Data is grouped into topics inside a Kafka cluster. MongoDB provides a Kafka connector certified by Confluent, one of the largest Kafka providers. With the Kafka connector and Confluent software, you can publish data from a MongoDB cluster into Kafka topics using a source connector. Additionally, with a sink connector, you can consume data from a Kafka topic to persist directly and consistently into a MongoDB collection inside your MongoDB cluster. In this article, we will provide a simple step-by-step guide on how to connect a remote Kafka cluster—in this case, a Confluent Cloud service—with a MongoDB Atlas cluster. For simplicity purposes, the installation is minimal and designed for a small development environment. However, through the article, we will provide guidance and links for production-related considerations. > **Pre-requisite**: To avoid JDK known certificate issues please update your JDK to one of the following patch versions or newer: > - JDK 11.0.7+ > - JDK 13.0.3+ > - JDK 14.0.2+ ## Table of Contents 1. Create a Basic Confluent Cloud Cluster 1. Create an Atlas Project and Cluster 1. Install Local Confluent Community Binaries to Run a Kafka Connect Instance 1. Configure the MongoDB Connector with Kafka Connect Locally 1. Start and Test Sink and Source MongoDB Kafka Connectors 1. Summary ## Create a Basic Confluent Cloud Cluster We will start by creating a basic Kafka cluster in the Confluent Cloud. Once ready, create a topic to be used in the Kafka cluster. I created one named “orders.” This “orders” topic will be used by Kafka Sink connector. Any data in this topic will be persisted automatically in the Atlas database. You will also need another topic called "outsource.kafka.receipts". This topic will be used by the MongoDB Source connector, streaming reciepts from Atlas database. Generate an `api-key` and `api-secret` to interact with this Kafka cluster. For the simplicity of this tutorial, I have selected the “Global Access” api-key. For production, it is recommended to give as minimum permissions as possible for the api-key used. Get a hold of the generated keys for future use. Obtain the Kafka cluster connection string via `Cluster Overview > Cluster Settings > Identification > Bootstrap server` for future use. Basic clusters are open to the internet and in production, you will need to amend the access list for your specific hosts to connect to your cluster via advanced cluster ACLs. ## Create a MongoDB Atlas Project and Cluster Create a project and cluster or use an existing Atlas cluster in your project. Prepare your Atlas cluster for a kafka-connect connection. Inside your project’s access list, enable user and relevant IP addresses of your local host, the one used for Kafka Connect binaries. Finally, get a hold of the Atlas connection string for future use. 
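The connection string you copy from the Atlas UI will look roughly like the following (the username, password, and cluster hostname are placeholders here); you'll paste it into the connector properties files later in this tutorial:

```
mongodb+srv://<username>:<password>@<cluster-name>.<id>.mongodb.net/?retryWrites=true&w=majority
```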
## Install a Kafka Connect Worker Kafka Connect is one of the mechanisms to reliably stream data between different data systems and a Kafka cluster. For production use, we recommend using a distributed deployment for high availability, fault tolerance, and scalability. There is also a cloud version to install the connector on the Confluent Cloud. For this simple tutorial, we will use a standalone local Kafka Connect installation. To have the binaries to install kafka-connect and all of its dependencies, let’s download the files: ```shell curl -O http://packages.confluent.io/archive/7.0/confluent-community-7.0.1.tar.gz tar -xvf confluent-community-7.0.1.tar.gz ``` ## Configure Kafka Connect Configure the plugins directory where we will host the MongoDB Kafka Connector plugin: ```shell mkdir -p /usr/local/share/kafka/plugins ``` Edit the `/etc/schema-registry/connect-avro-standalone.properties` using the content provided below. Ensure that you replace the `:` with information taken from Confluent Cloud bootstrap server earlier. Additionally, replace the generated `` and `` taken from Confluent Cloud in every section. ``` bootstrap.servers=: Connect data. Every Connect user will # need to configure these based on the format they want their data in when loaded from or stored into Kafka key.converter=org.apache.kafka.connect.json.JsonConverter value.converter=org.apache.kafka.connect.json.JsonConverter # Converter-specific settings can be passed in by prefixing the Converter's setting with the converter you want to apply # it to key.converter.schemas.enable=false value.converter.schemas.enable=false # The internal converter used for offsets and config data is configurable and must be specified, but most users will # always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format. internal.key.converter=org.apache.kafka.connect.json.JsonConverter internal.value.converter=org.apache.kafka.connect.json.JsonConverter internal.key.converter.schemas.enable=false internal.value.converter.schemas.enable=false # Store offsets on local filesystem offset.storage.file.filename=/tmp/connect.offsets # Flush much faster than normal, which is useful for testing/debugging offset.flush.interval.ms=10000 ssl.endpoint.identification.algorithm=https sasl.mechanism=PLAIN request.timeout.ms=20000 retry.backoff.ms=500 sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ username="" password=""; security.protocol=SASL_SSL consumer.ssl.endpoint.identification.algorithm=https consumer.sasl.mechanism=PLAIN consumer.request.timeout.ms=20000 consumer.retry.backoff.ms=500 consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ username="" password=""; consumer.security.protocol=SASL_SSL producer.ssl.endpoint.identification.algorithm=https producer.sasl.mechanism=PLAIN producer.request.timeout.ms=20000 producer.retry.backoff.ms=500 producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ username="" password=""; producer.security.protocol=SASL_SSL plugin.path=/usr/local/share/kafka/plugins ``` **Important**: Place the `plugin.path` to point to our plugin directory with permissions to the user running the kafka-connect process. ### Install the MongoDB connector JAR: Download the “all” jar and place it inside the plugin directory. 
```shell cp ~/Downloads/mongo-kafka-connect-1.6.1-all.jar /usr/local/share/kafka/plugins/ ``` ### Configure a MongoDB Sink Connector The MongoDB Sink connector will allow us to read data off a specific Kafka topic and write to a MongoDB collection inside our cluster. Create a MongoDB sink connector properties file in the main working dir: `mongo-sink.properties` with your Atlas cluster details replacing `:@/` from your Atlas connect tab. The working directory can be any directory that the `connect-standalone` binary has access to and its path can be provided to the `kafka-connect` command shown in "Start Kafka Connect and Connectors" section. ``` name=mongo-sink topics=orders connector.class=com.mongodb.kafka.connect.MongoSinkConnector tasks.max=1 connection.uri=mongodb+srv://:@/?retryWrites=true&w=majority database=kafka collection=orders max.num.retries=1 retries.defer.timeout=5000 ``` With the above configuration, we will listen to the topic called “orders” and publish the input documents into database `kafka` and collection name `orders`. ### Configure Mongo Source Connector The MongoDB Source connector will allow us to read data off a specific MongoDB collection topic and write to a Kafka topic. When data will arrive into a collection called `receipts`, we can use a source connector to transfer it to a Kafka predefined topic named “outsource.kafka.receipts” (the configured prefix followed by the `.` name as a topic—it's possible to use advanced mapping to change that). Let’s create file `mongo-source.properties` in the main working directory: ``` name=mongo-source connector.class=com.mongodb.kafka.connect.MongoSourceConnector tasks.max=1 # Connection and source configuration connection.uri=mongodb+srv://:@/?retryWrites=true&w=majority database=kafka collection=receipts topic.prefix=outsource topic.suffix= poll.max.batch.size=1000 poll.await.time.ms=5000 # Change stream options pipeline=] batch.size=0 change.stream.full.document=updateLookup publish.full.document.only=true collation= ``` The main properties here are the database, collection, and aggregation pipeline used to listen for incoming changes as well as the connection string. The `topic.prefix` adds a prefix to the `.` namespace as the Kafka topic on the Confluent side. In this case, the topic name that will receive new MongoDB records is “outsource.kafka.receipts” and was predefined earlier in this tutorial. I have also added `publish.full.document.only=true` as I only need the actual document changed or inserted without the change stream event wrapping information. ### Start Kafka Connect and Connectors For simplicity reasons, I am running the standalone Kafka Connect in the foreground. ``` ./confluent-7.0.1/bin/connect-standalone ./confluent-7.0.1/etc/schema-registry/connect-avro-standalone.properties mongo-sink.properties mongo-source.properties ``` > **Important**: Run with the latest Java version to avoid JDK SSL bugs. Now every document that will be populated to topic “orders” will be inserted into the `orders` collection using a sink connector. A source connector we configured will transmit every receipt document from `receipt` collection back to another topic called "outsource.kafka.receipts" to showcase a MongoDB consumption to a Kafka topic. ## Publish Documents to the Kafka Queue Through the Confluent UI, I have submitted a test document to my “orders” topic. 
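If you prefer the command line to the Confluent UI, the console producer that ships with the Confluent binaries can publish a test document too. The snippet below is only a sketch: the document itself is an arbitrary example, `<BOOTSTRAP_SERVER>` is the bootstrap address you noted earlier, and `client.properties` is assumed to be a small file containing the same `sasl.jaas.config` and `security.protocol` settings used above.

```shell
echo '{"orderId": 1, "item": "espresso machine", "quantity": 2}' | \
  ./confluent-7.0.1/bin/kafka-console-producer \
  --bootstrap-server <BOOTSTRAP_SERVER> \
  --topic orders \
  --producer.config client.properties
```

Either way, the sink connector picks the message up from the "orders" topic and writes it to Atlas.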
### Atlas Cluster is Being Automatically Populated with the Data

Looking into my Atlas cluster, I can see a new collection named `orders` in the `kafka` database.

Now, let's assume that our application received the order document from the `orders` collection and produced a receipt. We can replicate this by inserting a document into the `kafka.receipts` collection. This operation will cause the source connector to produce a message into the "outsource.kafka.receipts" topic.

### Kafka "outsource.kafka.receipts" Topic

Log lines from kafka-connect will show that the process received and published the document:

```
[2021-12-14 15:31:18,376] INFO [mongo-source|task-0] [Producer clientId=connector-producer-mongo-source-0] Cluster ID: lkc-65rmj (org.apache.kafka.clients.Metadata:287)
[2021-12-14 15:31:18,675] INFO [mongo-source|task-0] Opened connection [connectionId{localValue:21, serverValue:99712}] to dev-shard-00-02.uvwhr.mongodb.net:27017 (org.mongodb.driver.connection:71)
[2021-12-14 15:31:18,773] INFO [mongo-source|task-0] Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:203)
[2021-12-14 15:31:18,773] INFO [mongo-source|task-0] WorkerSourceTask{id=mongo-source-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:233)
[2021-12-14 15:31:27,671] INFO [mongo-source|task-0|offsets] WorkerSourceTask{id=mongo-source-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:505)
[2021-12-14 15:31:37,673] INFO [mongo-source|task-0|offsets] WorkerSourceTask{id=mongo-source-0} flushing 1 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:505)
```

## Summary

In this how-to article, I have covered the fundamentals of building a simple yet powerful integration between MongoDB Atlas and Kafka clusters, using the MongoDB Kafka Connector and Kafka Connect. This should be a good starting point for your next event-driven application stack and a successful integration between MongoDB and Kafka.

Try out MongoDB Atlas and the Kafka connector today!
md
{ "tags": [ "MongoDB", "Java", "Kafka" ], "pageDescription": "A simple step-by-step tutorial on how to use MongoDB Atlas with a Kafka Connector and connect it to any Remote Kafka Cluster.", "contentType": "Tutorial" }
Kafka to MongoDB Atlas End to End Tutorial
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/realm/oauth-and-realm-serverless
created
# OAuth & MongoDB Realm Serverless Functions I recently had the opportunity to work with Lauren Schaefer and Maxime Beugnet on a stats tracker for some YouTube statistics that we were tracking manually at the time. I knew that to access the YouTube API, we would need to authenticate using OAuth 2. I also knew that because we were building the app on MongoDB Realm Serverless functions, someone would probably need to write the implementation from scratch. I've dealt with OAuth before, and I've even built client implementations before, so I thought I'd volunteer to take on the task of implementing this workflow. It turned out to be easier than I thought, and because it's such a common requirement, I'm documenting the process here, in case you need to do the same thing. This post assumes that you've worked with MongoDB Realm Functions in the past, and that you're comfortable with the concepts around calling REST-ish APIs. But first... ## What the Heck is OAuth 2? OAuth 2 is an authorization protocol which allows unrelated servers to allow authenticated access to their services, without sharing user credentials, such as your login password. What this means in this case is that YouTube will allow my Realm application to operate *as if it was logged in as a MongoDB user*. There are some extra features for added control and security, like the ability to only allow access to certain functionality. In our case, the application will only need read-only access to the YouTube data, so there's no need to give it permission to delete MongoDB's YouTube videos! ### What Does it Look Like? Because OAuth 2 doesn't transmit the user's login credentials, there is some added complexity to make this work. From the user's perspective, it looks like this: 1. The user clicks on a button (or in my minimal implementation, they type in a specific URL), which redirects the browser to the authorizing service—in this case YouTube. 2. The authorizing service asks the user to log in, if necessary. 3. The authorizing service asks the user to approve the request to allow the Realm app to make requests to the YouTube API on their behalf. 4. If the user approves, then the browser redirects back to the Realm application, but with an extra parameter added to the URL containing a code which can be used to obtain access tokens. Behind the scenes, there's a Step 5, where the Realm service makes an extra HTTPS request to the YouTube API, using the code provided in Step 4, requesting an access token and a refresh token. Access tokens are only valid for an hour. When they expire, a new access token can be requested from YouTube, using the refresh token, which only expires if it hasn't been used for six months! If this sounds complicated, that's because it is! If you look more closely at the diagram above, though, you can see that there are only actually two requests being made by the browser to the Realm app, and only one request being made by the Realm app directly to Google. As long as you implement those three things, you'll have implemented the OAuth's full authorization flow. Once the authorization flow has been completed by the appropriate user (a user who has permission to log in as the MongoDB organization), as long as the access token is refreshed using the refresh token, API calls can be made to the YouTube API indefinitely. ## Setting Up the Necessary Accounts You'll need to create a Realm app and an associated Google project, and link the two together. There are quite a few steps, so make sure you don't miss any! 
### Create a Realm App Go to the MongoDB Realm UI and log in if necessary. I'm going to assume that you have already created a MongoDB Atlas cluster, and an associated Realm App. If not, follow the steps described in the MongoDB documentation. ### Create a Google API Project This flow is loosely applicable to any OAuth service, but I'll be working with Google's YouTube API. The first thing to do is to create a project in the Google API Console that is analogous to your Realm app. Go to the Google API Console. Click the projects list (at the top-left of the screen), then click the "Create Project" button, and enter a name. I entered "DREAM" because that's the funky acronym we came up with for the analytics monitor project my team was working on. Select the project, then click the radio button that says "External" to make the app available to anyone with a Google account, and click "Create" to finish creating your project. Ignore the form that you're presented with for now. On the left-hand side of the screen, click "Library" and in the search box, enter "YouTube" to filter Google's enormous API list. Select each of the APIs you wish to use—I selected the YouTube Data API and the YouTube Analytics API—and click the "Enable" button to allow your app to make calls to these APIs. Now, select "OAuth consent screen" from the left-hand side of the window. Next to the name of your app, click "Edit App." You'll be taken to a form that will allow you to specify how your OAuth consent screens will look. Enter a sensible app name, your email address, and if you want to, upload a logo for your project. You can ignore the "App domain" fields for now. You'll need to enter an Authorized domain by clicking "Add Domain" and enter "mongodb-realm.com" (without the quotes!). Enter your email address under "Developer contact information" and click "Save and Continue." In the table of scopes, check the boxes next to the scopes that end with "youtube.readonly" and "yt-analytics.readonly." Then click "Update." On the next screen, click "Save and Continue" to go to the "Test users" page. Because your app will be in "testing" mode while you're developing it, you'll need to add the email addresses of each account that will be allowed to authenticate with it, so I added my email address along with those of my team. Click "Save and Continue" for a final time and you're done configuring the OAuth consent screen! A final step is to generate some credentials your Realm app can use to prove to the Google API that the requests come from where they say they do. Click on "Credentials" on the left-hand side of the screen, click "Create Credentials" at the top, and select "OAuth Client ID." The "Application Type" is "Web application." Enter a "Name" of "Realm App" (or another useful identifier, if you prefer), and then click "Create." You'll be shown your client ID and secret values. Leave them up on the screen, and *in a different tab*, go to your Realm app and select "Values" from the left side. Click the "Create New Value" button, give it a name of "GOOGLE_CLIENT_ID," select "Value," and paste the client ID into the content text box. Repeat with the client secret, but select "Secret," and give it the name "GOOGLE_CLIENT_SECRET." You'll then be able to access these values with code like `context.values.get("GOOGLE_CLIENT_ID")` in your Realm function. Once you've got the values safely stored in your Realm App, you've now got everything you need to authorize a user with the YouTube Analytics API. ## Let's Write Some Code! 
To create an HTTP endpoint, you'll need to create an HTTP service in your Realm App. Go to your Realm App, select "3rd Party Services" on the left side, and then click the "Add a Service" button. Select HTTP and give it a "Service Name." I chose "google_oauth." A webhook function is automatically created for you, and you'll be taken to its settings page. Give the webhook a name, like "authorizor," and set the "HTTP Method" to "GET." While you're here, you should copy the "Webhook URL." Go back to your Google API project, "Credentials," and then click on the Edit (pencil) button next to your Realm app OAuth client ID. Under "Authorized redirect URIs," click "Add URI," paste the URI into the text box, and click "Save." Go back to your Realm Webhook settings, and click "Save" at the bottom of the page. You'll be taken to the function editor, and you'll see that some sample code has been inserted for you. Replace it with the following skeleton: ``` javascript exports = async function (payload, response) { const querystring = require('querystring'); }; ``` Because the function will be making outgoing HTTP calls that will need to be awaited, I've made it an async function. Inside the function, I've required the querystring library because the function will also need to generate query strings for redirecting to Google. After the require line, paste in the following constants, which will be required for authorizing users with Google: ``` javascript // https://developers.google.com/youtube/v3/guides/auth/server-side-web-apps#httprest const GOOGLE_OAUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"; const GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"; const SCOPES = [ "https://www.googleapis.com/auth/yt-analytics.readonly", "https://www.googleapis.com/auth/youtube.readonly", ]; ``` Add the following lines, which will obtain values for the Google credentials client ID and secret, and also obtain the URL for the current webhook call: ``` javascript // Following obtained from: https://console.developers.google.com/apis/credentials const CLIENT_ID = context.values.get("GOOGLE_CLIENT_ID"); const CLIENT_SECRET = context.values.get("GOOGLE_CLIENT_SECRET"); const OAUTH2_CALLBACK = context.request.webhookUrl; ``` Once this is done, the code should check to see if it's being called via a Google redirect due to an error. This is the case if it's called with an `error` parameter. If so, a good option is to log the error and display it to the user. Add the following code which does this: ``` javascript const error = payload.query.error; if (typeof error !== 'undefined') { // Google says there's a problem: console.error("Error code returned from Google:", error); response.setHeader('Content-Type', 'text/plain'); response.setBody(error); return response; } ``` Now to implement Step 1 of the authorization flow illustrated at the start of this post! When the user requests this webhook URL, they won't provide any parameters, whereas when Google redirects to it, the URL will include a `code` parameter. So, by checking if the code parameter is absent, you can determine that this is the Step 1 call. 
Add the following code: ``` javascript const oauthCode = payload.query.code; if (typeof oauthCode === 'undefined') { // No code provided, so let's request one from Google: const oauthURL = new URL(GOOGLE_OAUTH_ENDPOINT); oauthURL.search = querystring.stringify({ 'client_id': CLIENT_ID, 'redirect_uri': OAUTH2_CALLBACK, 'response_type': 'code', 'scope': SCOPES.join(' '), 'access_type': "offline", }); response.setStatusCode(302); response.setHeader('Location', oauthURL.href); } else { // This empty else block will be filled in below. } ``` The code above adds the appropriate parameters to the Google OAuth endpoint described in their OAuth flow documentation, and then redirects the browser to this endpoint, which will display a consent page to the user. When Steps 2 and 3 are complete, the browser will be redirected to this webhook (because that's the URL contained in `OAUTH2_CALLBACK`) with an added `code` parameter. Add the following code inside the empty `else` block you added above, to handle the case where a `code` parameter is provided: ``` javascript // We have a code, so we've redirected successfully from Google's consent page. // Let's post to Google, requesting an access token: let res = await context.http.post({ url: GOOGLE_TOKEN_ENDPOINT, body: { client_id: CLIENT_ID, client_secret: CLIENT_SECRET, code: oauthCode, grant_type: 'authorization_code', redirect_uri: OAUTH2_CALLBACK, }, encodeBodyAsJSON: true, }); let tokens = JSON.parse(res.body.text()); if (typeof tokens.expires_in === "undefined") { throw new Error("Error response from Google: " + JSON.stringify(tokens)); } if (typeof tokens.refresh_token === "undefined") { return { "message": `You appear to have already linked to Google. You may need to revoke your OAuth token (${tokens.access_token}) and delete your auth token document. https://developers.google.com/identity/protocols/oauth2/web-server#tokenrevoke` }; } tokens._id = "youtube"; tokens.updated = new Date(); tokens.expires_at = new Date(); tokens.expires_at.setTime(Date.now() + (tokens.expires_in * 1000)); const tokens_collection = context.services.get("mongodb-atlas").db("auth").collection("auth_tokens"); if (await tokens_collection.findOne({ _id: "youtube" })) { await tokens_collection.updateOne( { _id: "youtube" }, { '$set': tokens } ); } else { await tokens_collection.insertOne(tokens); } return {"message": "ok"}; ``` There's quite a lot of code here to implement Step 5, but it's not too complicated. It makes a request to the Google token endpoint, providing the code from the URL, to obtain both an access token and a refresh token for when the access token expires (which it does after an hour). It then checks for errors, modifies the JavaScript object a little to make it suitable for storing in MongoDB, and then it saves it to the `tokens_collection`. You can find all the code for this webhook function on GitHub. ## Authorizing the Realm App Go to the webhook's "Settings" tab, copy the webhook's URL, and paste it into a new browser tab. You should see the following scary warning page! This is because the app has not been checked out by Google, which would be the case if it was fully published. You can ignore it for now—it's safe because it's *your* app. Click "Continue" to progress to the consent page. The consent page should look something like the screenshot below. Click "Allow" and you should be presented with a very plain page that says `{"message": "ok"}`, which means that you've completed all of the authorization steps! 
If you load up the `auth_tokens` collection in MongoDB Atlas, you should see that it contains a single document containing the access and refresh tokens provided by Google. ## Using the Tokens to Make a Call To make a test call, create a new HTTP service webhook, and paste in the following code: ``` javascript exports = async function(payload, response) { const querystring = require('querystring'); // START OF TEMPORARY BLOCK ----------------------------- // Get the current token: const tokens_collection = context.services.get("mongodb-atlas").db("auth").collection("auth_tokens"); const tokens = await tokens_collection.findOne({_id: "youtube"}); // If this code is executed more than an hour after authorization, the token will be invalid: const accessToken = tokens.access_token; // END OF TEMPORARY BLOCK ------------------------------- // Get the playlists owned by this user: const url = new URL("https://www.googleapis.com/youtube/v3/playlists"); url.search = querystring.stringify({ "mine": "true", "part": "snippet,id", }); // Make an authenticated call: const result = await context.http.get({ url: url.href, headers: { 'Authorization': [`Bearer ${accessToken}`], 'Accept': ['application/json'], }, }); response.setHeader('Content-Type', 'text/plain'); response.setBody(result.body.text()); }; ``` The summary of this code is that it looks up an access token in the `auth_tokens` collection, and then makes an authenticated request to YouTube's `playlists` endpoint. Authentication is proven by providing the access token as a bearer token in the 'Authorization' header. Test out this function by calling the webhook in a browser tab. It should display some JSON, listing details about your YouTube playlists. The problem with this code is that if you run it over an hour after authorizing with YouTube, then the access token will have expired, and you'll get an error message! To account for this, I created a function called `get_token`, which will refresh the access token if it's expired. ## Token Refreshing The `get_token` function is a standard MongoDB Realm serverless function, *not* a webhook. Click "Functions" on the left side of the page in MongoDB Realm, click "Create New Function," and name your function "get_token." In the function editor, paste in the following code: ``` javascript exports = async function(){ const GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"; const CLIENT_ID = context.values.get("GOOGLE_CLIENT_ID"); const CLIENT_SECRET = context.values.get("GOOGLE_CLIENT_SECRET"); const tokens_collection = context.services.get("mongodb-atlas").db("auth").collection("auth_tokens"); // Look up tokens: let tokens = await tokens_collection.findOne({_id: "youtube"}); if (new Date() >= tokens.expires_at) { // access_token has expired. Get a new one. 
let res = await context.http.post({ url: GOOGLE_TOKEN_ENDPOINT, body: { client_id: CLIENT_ID, client_secret: CLIENT_SECRET, grant_type: 'refresh_token', refresh_token: tokens.refresh_token, }, encodeBodyAsJSON: true, }); tokens = JSON.parse(res.body.text()); tokens.updated = new Date(); tokens.expires_at = new Date(); tokens.expires_at.setTime(Date.now() + (tokens.expires_in * 1000)); await tokens_collection.updateOne( { _id: "youtube" }, { $set: { access_token: tokens.access_token, expires_at: tokens.expires_at, expires_in: tokens.expires_in, updated: tokens.updated, }, }, ); } return tokens.access_token; }; ``` The start of this function does the same thing as the temporary block in the webhook—it looks up the currently stored access token in MongoDB Atlas. It then checks to see if the token has expired, and if it has, it makes a call to Google with the `refresh_token`, requesting a new access token, which it then uses to update the MongoDB document. Save this function and then return to your test webhook. You can replace the code between the TEMPORARY BLOCK comments with the following line of code: ``` javascript // Get a token (it'll be refreshed if necessary): const accessToken = await context.functions.execute("get_token"); ``` From now on, this should be all you need to do to make an authorized request against the Google API—obtain the access token with `get_token` and add it to your HTTP request as a bearer token in the `Authorization` header. ## Conclusion I hope you found this useful! The OAuth 2 protocol can seem a little overwhelming, and the incompatibility of various client libraries, such as Google's, with MongoDB Realm can make life a bit more difficult, but this post should demonstrate how, with a webhook and a utility function, much of OAuth's complexity can be hidden away in a well-designed MongoDB Realm app. > If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
md
{ "tags": [ "Realm", "JavaScript", "Serverless" ], "pageDescription": "Authenticate with OAuth2 and MongoDB Realm Functions", "contentType": "Tutorial" }
OAuth & MongoDB Realm Serverless Functions
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/use-azure-key-vault-mongodb-client-side-field-level-encryption
created
# Integrate Azure Key Vault with MongoDB Client-Side Field Level Encryption When implementing MongoDB’s client-side field level encryption (CSFLE), you’ll find yourself making an important decision: Where do I store my customer master key? In another tutorial, I guided readers through the basics of CSFLE by using a locally-generated and stored master key. While this works for educational and local development purposes, it isn’t suitable for production! In this tutorial, we’ll see how to use Azure Key Vault to generate and securely store our master key. ## Prerequisites * A MongoDB Atlas cluster running MongoDB 4.2 (or later) OR MongoDB 4.2 Enterprise Server (or later)—required for automatic encryption * MongoDB .NET Driver 2.13.0 (or later) * Mongocryptd * An Azure Account with an active subscription and the same permissions as those found in any of these Azure AD roles (only one is needed): * Application administrator * Application developer * Cloud application administrator * An Azure AD tenant (you can use an existing one, assuming you have appropriate permissions) * Azure CLI * Cloned sample application ## Quick Jump **Prepare Azure Infrastructure** * Register App in Azure Active Directory * Create a Client Secret * Create an Azure Key Vault * Create and Add a Key to your Key Vault * Grant Application Permissions to Key Vault **Configure your Client Application to use Azure Key Vault and CSFLE** * Integrate Azure Key Vault into Your Client Application * The Results - What You Get After Integrating Azure Key Vault with MongoDB CSFLE - - - ## Register App in Azure Active Directory In order to establish a trust relationship between our application and the Microsoft identity platform, we first need to register it.  1. Sign in to the Azure portal. 2. If you have access to multiple tenants, in the top menu, use the “Directory + subscription filter” to select the tenant in which you want to register an application. 3. In the main search bar, search for and select “Azure Active Directory.” 4. On the left-hand navigation menu, find the Manage section and select “App registrations,” then “+ New registration.” 5. Enter a display name for your application. You can change the display name at any time and multiple app registrations can share the same name. The app registration's automatically generated Application (client) ID, not its display name, uniquely identifies your app within the identity platform. 6. Specify who can use the application, sometimes called its sign-in audience. For this tutorial, I’ve selected “Accounts in this organizational directory only (Default Directory only - Single tenant).” This only allows users that are in my current tenant access to my application. 7. Click “Register.” Once the initial app registration is complete, copy the **Directory (tenant) ID** and **Application (client) ID** as we’ll need them later on. 8. Find the linked application under “Managed application in local directory” and click on it. 9. Once brought to the “Properties” page, also copy the “**Object ID**” as we’ll need this too. ## Create a Client Secret Once your application is registered, we’ll need to create a client secret for it. This will be required when authenticating to the Key Vault we’ll be creating soon. 1. On the overview of your newly registered application, click on “Add a certificate or secret”: 2. Under “Client secrets,” click “+ New client secret.” 3. Enter a short description for this client secret and leave the default “Expires” setting of 6 months. 4. Click “Add.” 
Once the client secret is created, be sure to copy the secret’s “**Value**” as we’ll need it later. It’s also worth mentioning that once you leave this page, the secret value is never displayed again, so be sure to record it at least once! ## Create an Azure Key Vault Next up, an Azure Key Vault! We’ll create one so we can securely store our customer master key. We’ll be completing these steps via the Azure CLI, so open up your favorite terminal and follow along: 1. Sign in to the Azure CLI using the `az login` command. Finish the authentication steps by following the steps displayed in your terminal. 2. Create a resource group:  ``` bash az group create --name "YOUR-RESOURCE-GROUP-NAME" --location "YOUR-AZURE-REGION" ``` 3. Create a key vault:  ``` bash az keyvault create --name "YOUR-KEYVAULT-NAME" --resource-group "YOUR-RESOURCE-GROUP-NAME" --location "YOUR-AZURE-REGION" ``` ## Create and Add a Key to Your Key Vault With a key vault, we can now create our customer master key! This will be stored, managed, and secured by Azure Key Vault. Create a key and add it to our key vault: ``` bash az keyvault key create --vault-name "YOUR-KEYVAULT-NAME" --name "YOUR-KEY-NAME" --protection software ``` The `--protection` parameter designates the key protection type. For now, we'll use the `software` type. Once the command completes, take note of your key’s "**name**" as we’ll need it later! ## Grant Application Permissions to Key Vault To enable our client application access to our key vault, some permissions need to be granted: 1. Give your application the wrapKey and unwrapKey permissions to the key vault. (For the `--object-id` parameter, paste in the Object ID of the application we registered earlier. This is the Object ID we copied in the last "Register App in Azure Active Directory" step.) ``` bash az keyvault set-policy --name "YOUR-KEYVAULT-NAME" --key-permissions wrapKey unwrapKey --object-id "YOUR-APPLICATION-OBJECT-ID" ``` 2. Upon success, you’ll receive a JSON object. Find and copy the value for the “**vaultUri**” key. For example, mine is `https://csfle-mdb-demo-vault.vault.azure.net`. ## Integrate Azure Key Vault into Your Client Application Now that our cloud infrastructure is configured, we can start integrating it into our application. We’ll be referencing the sample repo from our prerequisites for these steps, but feel free to use the portions you need in an existing application. 1. If you haven’t cloned the repo yet, do so now!  ``` shell git clone https://github.com/adriennetacke/mongodb-csfle-csharp-demo-azure.git ``` 2. Navigate to the root directory `mongodb-csfle-csharp-demo-azure` and open the `EnvoyMedSys` sample application in Visual Studio. 3. In the Solution Explorer, find and open the `launchSettings.json` file (`Properties` > `launchSettings.json`). 4. Here, you’ll see some scaffolding for some variables. Let’s quickly go over what those are: * `MDB_ATLAS_URI`: The connection string to your MongoDB Atlas cluster. This enables us to store our data encryption key, encrypted by Azure Key Vault. * `AZURE_TENANT_ID`: Identifies the organization of the Azure account. * `AZURE_CLIENT_ID`: Identifies the `clientId` to authenticate your registered application. * `AZURE_CLIENT_SECRET`: Used to authenticate your registered application. * `AZURE_KEY_NAME`: Name of the Customer Master Key stored in Azure Key Vault. * `AZURE_KEYVAULT_ENDPOINT`: URL of the Key Vault. E.g., `yourVaultName.vault.azure.net`. 5. Replace all of the placeholders in the `launchSettings.json` file with **your own information**. 
Each variable corresponds to a value you were asked to copy and keep track of: * `MDB_ATLAS_URI`: Your **Atlas URI**. * `AZURE_TENANT_ID`: **Directory (tenant) ID**. * `AZURE_CLIENT_ID`: **Application (client) ID.** * `AZURE_CLIENT_SECRET`: Secret **Value** from our client secret. * `AZURE_KEY_NAME`: Key **Name**. * `AZURE_KEYVAULT_ENDPOINT`: Our Key Vault’s **vaultUri**. 6. Save all your files! Before we run the application, let’s go over what’s happening: When we run our main program, we set the connection to our Atlas cluster and our key vault’s collection namespace. We then instantiate two helper classes: a `KmsKeyHelper` and an `AutoEncryptHelper`. The `KmsKeyHelper`’s `CreateKeyWithAzureKmsProvider()` method is called to generate our encrypted data encryption key. This is then passed to the `AutoEncryptHelper`’s `EncryptedWriteAndReadAsync()` method to insert a sample document with encrypted fields and properly decrypt it when we need to fetch it. This is all in our `Program.cs` file: `Program.cs` ``` cs using System; using MongoDB.Driver; namespace EnvoyMedSys { public enum KmsKeyLocation { Azure, } class Program { public static void Main(string[] args) { var connectionString = Environment.GetEnvironmentVariable("MDB_ATLAS_URI"); var keyVaultNamespace = CollectionNamespace.FromFullName("encryption.__keyVaultTemp"); var kmsKeyHelper = new KmsKeyHelper( connectionString: connectionString, keyVaultNamespace: keyVaultNamespace); var autoEncryptHelper = new AutoEncryptHelper( connectionString: connectionString, keyVaultNamespace: keyVaultNamespace); var kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithAzureKmsProvider().GetAwaiter().GetResult(); autoEncryptHelper.EncryptedWriteAndReadAsync(kmsKeyIdBase64, KmsKeyLocation.Azure).GetAwaiter().GetResult(); Console.ReadKey(); } } } ``` Taking a look at the `KmsKeyHelper` class, there are a few important methods: the `CreateKeyWithAzureKmsProvider()` and `GetClientEncryption()` methods. I’ve opted to include comments in the code to make it easier to follow along: `KmsKeyHelper.cs` / `CreateKeyWithAzureKmsProvider()` ``` cs public async Task<string> CreateKeyWithAzureKmsProvider() { var kmsProviders = new Dictionary<string, IReadOnlyDictionary<string, object>>(); // Pull Azure Key Vault settings from environment variables var azureTenantId = Environment.GetEnvironmentVariable("AZURE_TENANT_ID"); var azureClientId = Environment.GetEnvironmentVariable("AZURE_CLIENT_ID"); var azureClientSecret = Environment.GetEnvironmentVariable("AZURE_CLIENT_SECRET"); var azureIdentityPlatformEndpoint = Environment.GetEnvironmentVariable("AZURE_IDENTIFY_PLATFORM_ENPDOINT"); // Optional, only needed if user is using a non-commercial Azure instance // Configure our registered application settings var azureKmsOptions = new Dictionary<string, object> { { "tenantId", azureTenantId }, { "clientId", azureClientId }, { "clientSecret", azureClientSecret }, }; if (azureIdentityPlatformEndpoint != null) { azureKmsOptions.Add("identityPlatformEndpoint", azureIdentityPlatformEndpoint); } // Specify remote key location; in this case, Azure kmsProviders.Add("azure", azureKmsOptions); // Constructs our client encryption settings which // specify which key vault client, key vault namespace, // and KMS providers to use. 
var clientEncryption = GetClientEncryption(kmsProviders); // Set KMS Provider Settings // Client uses these settings to discover the master key var azureKeyName = Environment.GetEnvironmentVariable("AZURE_KEY_NAME"); var azureKeyVaultEndpoint = Environment.GetEnvironmentVariable("AZURE_KEYVAULT_ENDPOINT"); // typically yourVaultName.vault.azure.net var azureKeyVersion = Environment.GetEnvironmentVariable("AZURE_KEY_VERSION"); // Optional var dataKeyOptions = new DataKeyOptions( masterKey: new BsonDocument { { "keyName", azureKeyName }, { "keyVaultEndpoint", azureKeyVaultEndpoint }, { "keyVersion", () => azureKeyVersion, azureKeyVersion != null } }); // Create Data Encryption Key var dataKeyId = clientEncryption.CreateDataKey("azure", dataKeyOptions, CancellationToken.None); Console.WriteLine($"Azure DataKeyId [UUID]: {dataKeyId}"); var dataKeyIdBase64 = Convert.ToBase64String(GuidConverter.ToBytes(dataKeyId, GuidRepresentation.Standard)); Console.WriteLine($"Azure DataKeyId [base64]: {dataKeyIdBase64}"); // Optional validation; checks that key was created successfully await ValidateKeyAsync(dataKeyId); return dataKeyIdBase64; } ``` `KmsKeyHelper.cs` / `GetClientEncryption()` ``` cs private ClientEncryption GetClientEncryption( Dictionary<string, IReadOnlyDictionary<string, object>> kmsProviders) { // Construct a MongoClient using our Atlas connection string var keyVaultClient = new MongoClient(_mdbConnectionString); // Set MongoClient, key vault namespace, and Azure as KMS provider var clientEncryptionOptions = new ClientEncryptionOptions( keyVaultClient: keyVaultClient, keyVaultNamespace: _keyVaultNamespace, kmsProviders: kmsProviders); return new ClientEncryption(clientEncryptionOptions); } ``` With our Azure Key Vault connected and data encryption key encrypted, we’re ready to insert some data into our Atlas cluster! This is where the `AutoEncryptHelper` class comes in. The important method to note here is the `EncryptedWriteAndReadAsync()` method: `AutoEncryptHelper.cs` / `EncryptedWriteAndReadAsync()` ``` cs public async Task EncryptedWriteAndReadAsync(string keyIdBase64, KmsKeyLocation kmsKeyLocation) { // Construct a JSON Schema var schema = JsonSchemaCreator.CreateJsonSchema(keyIdBase64); // Construct an auto-encrypting client var autoEncryptingClient = CreateAutoEncryptingClient( kmsKeyLocation, _keyVaultNamespace, schema); // Set our working database and collection to medicalRecords.patientData var collection = autoEncryptingClient .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName) .GetCollection<BsonDocument>(_medicalRecordsNamespace.CollectionName); var ssnQuery = Builders<BsonDocument>.Filter.Eq("ssn", __sampleSsnValue); // Upsert (update if found, otherwise create it) a document into the collection var medicalRecordUpdateResult = await collection .UpdateOneAsync(ssnQuery, new BsonDocument("$set", __sampleDocFields), new UpdateOptions() { IsUpsert = true }); if (!medicalRecordUpdateResult.UpsertedId.IsBsonNull) { Console.WriteLine("Successfully upserted the sample document!"); } // Query by SSN field with auto-encrypting client var result = await collection.Find(ssnQuery).SingleAsync(); // Proper result in console should show decrypted, human-readable document Console.WriteLine($"Encrypted client query by the SSN (deterministically-encrypted) field:\n {result}\n"); } ``` Now that we know what’s going on, run your application! 
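If the run fails immediately, a missing or misnamed environment variable is the usual culprit. As a quick sanity check, you could temporarily add something like the following near the top of `Main()`. This snippet isn't part of the sample repo; it simply loops over the variable names assumed in this tutorial:

``` cs
// Optional sanity check (not part of the sample repo): confirm that the
// environment variables this tutorial relies on are actually set.
var requiredVariables = new[]
{
    "MDB_ATLAS_URI",
    "AZURE_TENANT_ID",
    "AZURE_CLIENT_ID",
    "AZURE_CLIENT_SECRET",
    "AZURE_KEY_NAME",
    "AZURE_KEYVAULT_ENDPOINT"
};

foreach (var name in requiredVariables)
{
    if (string.IsNullOrEmpty(Environment.GetEnvironmentVariable(name)))
    {
        // A missing value here usually means launchSettings.json wasn't saved,
        // or the app was launched outside of Visual Studio.
        Console.WriteLine($"Warning: environment variable '{name}' is not set.");
    }
}
```
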
## The Results: What You Get After Integrating Azure Key Vault with MongoDB CSFLE If all goes well, your console will print out two `DataKeyIds` (UUID and base64) and a document that resembles the following:  `Sample Result Document (using my information)` ``` bash { _id:UUID('ab382f3e-bc79-4086-8418-836a877efff3'), keyMaterial:Binary('tvehP03XhUsztKr69lxlaGjiPhsNPjy6xLhNOLTpe4pYMeGjMIwvvZkzrwLRCHdaB3vqi9KKe6/P5xvjwlVHacQ1z9oFIwFbp9nk...', 0), creationDate:2021-08-24T05:01:34.369+00:00, updateDate:2021-08-24T05:01:34.369+00:00, status:0, masterKey:Object, provider:"azure", keyVaultEndpoint:"csfle-mdb-demo-vault.vault.azure.net", keyName:"MainKey" } ``` Here’s what my console output looks like, for reference: Screenshot of console output showing two Azure DataKeyIds and a non-formatted document. Seeing this is great news! A lot of things have just happened, and all of them are good: * Our application properly authenticated to our Azure Key Vault. * A properly generated data encryption key was created by our client application. * The data encryption key was properly encrypted by our customer master key that’s securely stored in Azure Key Vault. * The encrypted data encryption key was returned to our application and stored in our MongoDB Atlas cluster. Here’s the same process in a workflow: After a few more moments, and upon success, you’ll see a “Successfully upserted the sample document!” message, followed by the properly decrypted results of a test query. Again, here’s my console output for reference: This means our sample document was properly encrypted, inserted into our `patientData` collection, queried with our auto-encrypting client by SSN, and had all relevant fields correctly decrypted before returning them to our console. How neat!  And just because I’m a little paranoid, we can double-check that our data has actually been encrypted. If you log into your Atlas cluster and navigate to the `patientData` collection, you’ll see that our documents’ sensitive fields are all illegible: ## Let's Summarize That wasn’t so bad, right? Let's see what we've accomplished! This tutorial walked you through: * Registering an App in Azure Active Directory. * Creating a Client Secret. * Creating an Azure Key Vault. * Creating and Adding a Key to your Key Vault. * Granting Application Permissions to Key Vault. * Integrating Azure Key Vault into Your Client Application. * The Results: What You Get After Integrating Azure Key Vault with MongoDB CSFLE. By using a remote key management system like Azure Key Vault, you gain access to many benefits over using a local filesystem. The most important of these are the secure storage of the key, reduced risk of access permission issues, and easier portability! For more information, check out this helpful list of resources I used while preparing this tutorial: * az keyvault command list * Registering an application with the Microsoft Identity Platform * MongoDB CSFLE and Azure Key Vault And if you have any questions or need some additional help, be sure to check us out on the MongoDB Community Forums and start a topic! A whole community of MongoDB engineers (including the DevRel team) and fellow developers are sure to help!
md
{ "tags": [ "C#", "MongoDB", "Azure" ], "pageDescription": "Learn how to use Azure Key Vault as your remote key management system with MongoDB's client-side field level encryption, step-by-step.", "contentType": "Tutorial" }
Integrate Azure Key Vault with MongoDB Client-Side Field Level Encryption
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/code-examples/csharp/client-side-field-level-encryption-mongodb-csharp
created
# How to Use MongoDB Client-Side Field Level Encryption (CSFLE) with C# Client-side field level encryption (CSFLE) provides an additional layer of security to your most sensitive data. Using a supported MongoDB driver, CSFLE encrypts certain fields that you specify, ensuring they are never transmitted unencrypted, nor seen unencrypted by the MongoDB server. This may be the only time I use a Transformers GIF. Encryption GIFs are hard to find! This also means that it's nearly impossible to obtain sensitive information from the database server. Without access to a specific key, data cannot be decrypted and exposed, rendering attempts to intercept data from the client fruitless. Reading data directly from disk, even with DBA or root credentials, will also be impossible as the data is stored in an encrypted state. Key applications that showcase the power of client-side field level encryption are those in the medical field. If you quickly think back to the last time you visited a clinic, you already have an effective use case for an application that requires a mix of encrypted and non-encrypted fields. When you check into a clinic, the person may need to search for you by name or insurance provider. These are common data fields that are usually left non-encrypted. Then, there are more obvious pieces of information that require encryption: things like a Social Security number, medical records, or your insurance policy number. For these data fields, encryption is necessary. This tutorial will walk you through setting up a similar medical system that uses automatic client-side field level encryption in the MongoDB .NET Driver (for explicit, meaning manual, client-side field level encryption, check out these docs). In it, you'll: **Prepare a .NET Core console application** * Create a .NET Core Console Application * Install CSFLE Dependencies **Generate secure, random keys needed for CSFLE** * Create a Local Master Key * Create a Data Encryption Key **Configure CSFLE on the MongoClient** * Specify Encrypted Fields Using a JSON Schema * Create the CSFLE-Enabled MongoDB Client **See CSFLE in action** * Perform Encrypted Read/Write Operations * Bonus: What's the Difference with a Non-Encrypted Client? > 💡️ This can be an intimidating tutorial, so don't hesitate to take as many breaks as you need; in fact, complete the steps over a few days! I've tried my best to ensure each step completed acts as a natural save point for the duration of the entire tutorial. :) Let's do this step by step! ## Prerequisites * A MongoDB Atlas cluster running MongoDB 4.2 (or later) OR MongoDB 4.2 Enterprise Server (or later)—required for automatic encryption * MongoDB .NET Driver 2.12.0-beta (or later) * Mongocryptd * File system permissions (to start the mongocryptd process, if running locally) > 💻 The code for this tutorial is available in this repo. ## Create a .NET Core Console Application Let's start by scaffolding our console application. Open Visual Studio (I'm using Visual Studio 2019 Community Edition) and create a new project. When selecting a template, choose the "Console App (.NET Core)" option and follow the prompts to name your project. Visual Studio 2019 create a new project prompt; Console App (.NET Core) option is highlighted. ## Install CSFLE Dependencies Once the project template loads, we'll need to install one of our dependencies. 
In your Package Manager Console, use the following command to install the MongoDB Driver: ```bash Install-Package MongoDB.Driver -Version 2.12.0-beta1 ``` > 💡️ If your Package Manager Console is not visible in your IDE, you can get to it via *View > Other Windows > Package Manager Console* in the File Menu. The next dependency you'll need to install is mongocryptd, which is an application that is provided as part of MongoDB Enterprise and is needed for automatic field level encryption. Follow the instructions to install mongocryptd on your machine. In a production environment, it's recommended to run mongocryptd as a service at startup on your VM or container. Now that our base project and dependencies are set, we can move onto creating and configuring our different encryption keys. MongoDB client-side field level encryption uses an encryption strategy called envelope encryption. This strategy uses two different kinds of keys. The first key is called a **data encryption key**, which is used to encrypt/decrypt the data you'll be storing in MongoDB. The other key is called a **master key** and is used to encrypt the data encryption key. This is the top-level plaintext key that will always be required and is the key we are going to generate in the next step. > 🚨️ Before we proceed, it's important to note that this tutorial will > demonstrate the generation of a master key file stored as plaintext in > the root of our application. This is okay for **development** and > educational purposes, such as this tutorial. However, this should > **NOT** be done in a **production** environment! > > Why? In this scenario, anyone that obtains a copy of the disk or a VM > snapshot of the app server hosting our application would also have > access to this key file, making it possible to access the application's > data. > > Instead, you should configure a master key in a Key Management > System > such as Azure Key Vault or AWS KMS for production. > > Keep this in mind and watch for another post that shows how to implement > CSFLE with Azure Key Vault! ## Create a Local Master Key In this step, we generate a 96-byte, locally-managed master key. Then, we save it to a local file called `master-key.txt`. We'll be doing a few more things with keys, so create a separate class called `KmsKeyHelper.cs`. Then, add the following code to it: ``` csp // KmsKeyHelper.cs using System; using System.IO; namespace EnvoyMedSys { public class KmsKeyHelper { private readonly static string __localMasterKeyPath = "../../../master-key.txt"; public void GenerateLocalMasterKey() { using (var randomNumberGenerator = System.Security.Cryptography.RandomNumberGenerator.Create()) { var bytes = new byte[96]; randomNumberGenerator.GetBytes(bytes); var localMasterKeyBase64 = Convert.ToBase64String(bytes); Console.WriteLine(localMasterKeyBase64); File.WriteAllText(__localMasterKeyPath, localMasterKeyBase64); } } } } ``` So, what's happening here? Let's break it down, line by line: First, we declare and set a private variable called `__localMasterKeyPath`. This holds the path to where we save our master key. Next, we create a `GenerateLocalMasterKey()` method. In this method, we use .NET's Cryptography services to create an instance of a `RandomNumberGenerator`. Using this `RandomNumberGenerator`, we generate a cryptographically strong, 96-byte key. After converting it to a Base64 representation, we save the key to the `master-key.txt` file. Great! We now have a way to generate a local master key. Let's modify the main program to use it. 
In the `Program.cs` file, add the following code: ``` csp // Program.cs using System; using System.IO; namespace EnvoyMedSys { class Program { public static void Main() { var kmsKeyHelper = new KmsKeyHelper(); // Ensure GenerateLocalMasterKey() only runs once! if (!File.Exists("../../../master-key.txt")) { kmsKeyHelper.GenerateLocalMasterKey(); } Console.ReadKey(); } } } ``` In the `Main` method, we create an instance of our `KmsKeyHelper`, then call our `GenerateLocalMasterKey()` method. Pretty straightforward! Save all files, then run your program. If all is successful, you'll see a console pop up and the Base64 representation of your newly generated master key printed in the console. You'll also see a new `master-key.txt` file appear in your solution explorer. Now that we have a master key, we can move on to creating a data encryption key. ## Create a Data Encryption Key The next key we need to generate is a data encryption key. This is the key the MongoDB driver stores in a key vault collection, and it's used for automatic encryption and decryption. Automatic encryption requires MongoDB Enterprise 4.2 or a MongoDB 4.2 Atlas cluster. However, automatic *decryption* is supported for all users. See how to configure automatic decryption without automatic encryption. Let's add a few more lines of code to the `Program.cs` file: ``` csp using System; using System.IO; using MongoDB.Driver; namespace EnvoyMedSys { class Program { public static void Main() { var connectionString = Environment.GetEnvironmentVariable("MDB_URI"); var keyVaultNamespace = CollectionNamespace.FromFullName("encryption.__keyVault"); var kmsKeyHelper = new KmsKeyHelper( connectionString: connectionString, keyVaultNamespace: keyVaultNamespace); string kmsKeyIdBase64; // Ensure GenerateLocalMasterKey() only runs once! if (!File.Exists("../../../master-key.txt")) { kmsKeyHelper.GenerateLocalMasterKey(); } kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithLocalKmsProvider(); Console.ReadKey(); } } } ``` So, what's changed? First, we added an additional import (`MongoDB.Driver`). Next, we declared a `connectionString` and a `keyVaultNamespace` variable. For the key vault namespace, MongoDB will automatically create the database `encryption` and collection `__keyVault` if they do not currently exist. Both the database and collection names were purely my preference. You can choose to name them something else if you'd like! Next, we modified the `KmsKeyHelper` instantiation to accept two parameters: the connection string and key vault namespace we previously declared. Don't worry, we'll be changing our `KmsKeyHelper.cs` file to match this soon. Finally, we declare a `kmsKeyIdBase64` variable and set it to the result of a new method we'll create soon: `CreateKeyWithLocalKmsProvider()`. This will hold our data encryption key. ### Securely Setting the MongoDB Connection In our code, we set our MongoDB URI by pulling from environment variables. This is far safer than pasting a connection string directly into our code and is scalable in a variety of automated deployment scenarios. For our purposes, we'll create a `launchSettings.json` file. > 💡️ Don't commit the `launchSettings.json` file to a public repo! In > fact, add it to your `.gitignore` file now, if you have one or plan to > share this application. Otherwise, you'll expose your MongoDB URI to the > world! Right-click on your project and select "Properties" in the context menu. The project properties will open to the "Debug" section. 
In the "Environment variables:" area, add a variable called `MDB_URI`, followed by the connection URI: Adding an environment variable to the project settings in Visual Studio 2019. What value do you set to your `MDB_URI` environment variable? * MongoDB Atlas: If using a MongoDB Atlas cluster, paste in your Atlas URI. * Local: If running a local MongoDB instance and haven't changed any default settings, you can use the default connection string: `mongodb://localhost:27017`. Once your `MDB_URI` is added, save the project properties. You'll see that a `launchSettings.json` file will be automatically generated for you! Now, any `Environment.GetEnvironmentVariable()` calls will pull from this file. With these changes, we now have to modify and add a few more methods to the `KmsKeyHelper` class. Let's do that now. First, add these additional imports: ``` csp // KmsKeyHelper.cs using System.Collections.Generic; using System.Threading; using MongoDB.Bson; using MongoDB.Driver; using MongoDB.Driver.Encryption; ``` Next, declare two private variables and create a constructor that accepts both a connection string and key vault namespace. We'll need this information to create our data encryption key; this also makes it easier to extend and integrate with a remote KMS later on. ``` csp // KmsKeyhelper.cs private readonly string _mdbConnectionString; private readonly CollectionNamespace _keyVaultNamespace; public KmsKeyHelper( string connectionString, CollectionNamespace keyVaultNamespace) { _mdbConnectionString = connectionString; _keyVaultNamespace = keyVaultNamespace; } ``` After the GenerateLocalMasterKey() method, add the following new methods. Don't worry, we'll go over each one: ``` csp // KmsKeyHelper.cs public string CreateKeyWithLocalKmsProvider() { // Read Master Key from file & convert string localMasterKeyBase64 = File.ReadAllText(__localMasterKeyPath); var localMasterKeyBytes = Convert.FromBase64String(localMasterKeyBase64); // Set KMS Provider Settings // Client uses these settings to discover the master key var kmsProviders = new Dictionary>(); var localOptions = new Dictionary { { "key", localMasterKeyBytes } }; kmsProviders.Add("local", localOptions); // Create Data Encryption Key var clientEncryption = GetClientEncryption(kmsProviders); var dataKeyid = clientEncryption.CreateDataKey("local", new DataKeyOptions(), CancellationToken.None); clientEncryption.Dispose(); Console.WriteLine($"Local DataKeyId UUID]: {dataKeyid}"); var dataKeyIdBase64 = Convert.ToBase64String(GuidConverter.ToBytes(dataKeyid, GuidRepresentation.Standard)); Console.WriteLine($"Local DataKeyId [base64]: {dataKeyIdBase64}"); // Optional validation; checks that key was created successfully ValidateKey(dataKeyid); return dataKeyIdBase64; } ``` This method is the one we call from the main program. It's here that we generate our data encryption key. Lines 6-7 read the local master key from our `master-key.txt` file and convert it to a byte array. Lines 11-16 set the KMS provider settings the client needs in order to discover the master key. As you can see, we add the local provider and the matching local master key we've just retrieved. With these KMS provider settings, we construct additional client encryption settings. We do this in a separate method called `GetClientEncryption()`. Once created, we finally generate an encrypted key. As an extra measure, we call a third new method `ValidateKey()`, just to make sure the data encryption key was created. 
After these steps, and if successful, the `CreateKeyWithLocalKmsProvider()` method returns our data key id encoded in Base64 format. After the CreateKeyWithLocalKmsProvider() method, add the following method: ``` csp // KmsKeyHelper.cs private ClientEncryption GetClientEncryption( Dictionary<string, IReadOnlyDictionary<string, object>> kmsProviders) { var keyVaultClient = new MongoClient(_mdbConnectionString); var clientEncryptionOptions = new ClientEncryptionOptions( keyVaultClient: keyVaultClient, keyVaultNamespace: _keyVaultNamespace, kmsProviders: kmsProviders); return new ClientEncryption(clientEncryptionOptions); } ``` Within the `CreateKeyWithLocalKmsProvider()` method, we call `GetClientEncryption()` (the method we just added) to construct our client encryption settings. These include which key vault client, key vault namespace, and KMS providers to use. In this method, we construct a MongoClient using the connection string, then set it as a key vault client. We also use the key vault namespace that was passed in and the local KMS providers we previously constructed. These client encryption options are then returned. Last but not least, after GetClientEncryption(), add the final method: ``` csp // KmsKeyHelper.cs private void ValidateKey(Guid dataKeyId) { var client = new MongoClient(_mdbConnectionString); var collection = client .GetDatabase(_keyVaultNamespace.DatabaseNamespace.DatabaseName) #pragma warning disable CS0618 // Type or member is obsolete .GetCollection<BsonDocument>(_keyVaultNamespace.CollectionName, new MongoCollectionSettings { GuidRepresentation = GuidRepresentation.Standard }); #pragma warning restore CS0618 // Type or member is obsolete var query = Builders<BsonDocument>.Filter.Eq("_id", new BsonBinaryData(dataKeyId, GuidRepresentation.Standard)); var keyDocument = collection .Find(query) .Single(); Console.WriteLine(keyDocument); } ``` Though optional, this method conveniently checks that the data encryption key was created correctly. It does this by constructing a MongoClient using the specified connection string, then queries the database for the data encryption key. If it was successfully created, the data encryption key would have been inserted as a document into your replica set and will be retrieved in the query. With these changes, we're ready to generate our data encryption key. Make sure to save all files, then run your program. If all goes well, your console will print out two DataKeyIds (UUID and base64) as well as a document that resembles the following: ``` json { "_id" : CSUUID("aae4f3b4-91b6-4cef-8867-3113a6dfb27b"), "keyMaterial" : Binary(0, "rcfTQLRxF1mg98/Jr7iFwXWshvAVIQY6JCswrW+4bSqvLwa8bQrc65w7+3P3k+TqFS+1Ce6FW4Epf5o/eqDyT//I73IRc+yPUoZew7TB1pyIKmxL6ABPXJDkUhvGMiwwkRABzZcU9NNpFfH+HhIXjs324FuLzylIhAmJA/gvXcuz6QSD2vFpSVTRBpNu1sq0C9eZBSBaOxxotMZAcRuqMA=="), "creationDate" : ISODate("2020-11-08T17:58:36.372Z"), "updateDate" : ISODate("2020-11-08T17:58:36.372Z"), "status" : 0, "masterKey" : { "provider" : "local" } } ``` For reference, here's what my console output looks like: Console output showing two data key ids and a data object; these are successful signs of a properly generated data encryption key. If you want to be extra sure, you can also check your cluster to see that your data encryption key is stored as a document in the newly created encryption database and \_\_keyVault collection. Since I'm connecting with my Atlas cluster, here's what it looks like there: Saved data encryption key in MongoDB Atlas Sweet! 
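One thing worth noting before moving on: as written, `Main()` calls `CreateKeyWithLocalKmsProvider()` on every run, so each run creates and stores a brand-new data encryption key. That's fine for this tutorial, but in a real application you'd typically create the key once and reuse it. Below is a minimal sketch of one way to do that; it isn't part of the tutorial's sample code, and it assumes the driver's `DataKeyOptions` alternate key name support, so treat it as a starting point rather than a drop-in:

``` csp
// KmsKeyHelper.cs (optional; not part of the tutorial's sample code)
// Reuse an existing data encryption key by alternate key name instead of
// creating a new key on every run.
public string GetOrCreateKeyWithLocalKmsProvider(string keyAltName)
{
    var client = new MongoClient(_mdbConnectionString);
    var keyVaultCollection = client
        .GetDatabase(_keyVaultNamespace.DatabaseNamespace.DatabaseName)
        .GetCollection<BsonDocument>(_keyVaultNamespace.CollectionName);

    // If a key tagged with this alternate name already exists, reuse it.
    var existingKey = keyVaultCollection
        .Find(Builders<BsonDocument>.Filter.Eq("keyAltNames", keyAltName))
        .FirstOrDefault();

    if (existingKey != null)
    {
        // The _id is a standard UUID; its raw bytes are the base64 key id we need.
        return Convert.ToBase64String(existingKey["_id"].AsBsonBinaryData.Bytes);
    }

    // Otherwise, create a key the same way CreateKeyWithLocalKmsProvider() does,
    // but tag it with the alternate key name so it can be found next time.
    var localMasterKeyBytes = Convert.FromBase64String(File.ReadAllText(__localMasterKeyPath));
    var kmsProviders = new Dictionary<string, IReadOnlyDictionary<string, object>>
    {
        { "local", new Dictionary<string, object> { { "key", localMasterKeyBytes } } }
    };

    var clientEncryption = GetClientEncryption(kmsProviders);
    var dataKeyOptions = new DataKeyOptions(alternateKeyNames: new List<string> { keyAltName });
    var dataKeyId = clientEncryption.CreateDataKey("local", dataKeyOptions, CancellationToken.None);
    clientEncryption.Dispose();

    return Convert.ToBase64String(GuidConverter.ToBytes(dataKeyId, GuidRepresentation.Standard));
}
```

With something like this in place, `Main()` could call `GetOrCreateKeyWithLocalKmsProvider("demo-data-key")` (the name is arbitrary) instead of creating a fresh key each time.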
Now that we have generated a data encryption key, which has been encrypted itself with our local master key, the next step is to specify which fields in our application should be encrypted. ## Specify Encrypted Fields Using a JSON Schema In order for automatic client-side encryption and decryption to work, a JSON schema needs to be defined that specifies which fields to encrypt, which encryption algorithms to use, and the BSON Type of each field. Using our medical application as an example, let's plan on encrypting the following fields: ##### Fields to encrypt | Field name | Encryption algorithms | BSON Type | | ---------- | --------------------- | --------- | | SSN (Social Security Number) | Deterministic | `Int` | | Blood Type | Random | `String` | | Medical Records | Random | `Array` | | Insurance: Policy Number | Deterministic | `Int` (embedded inside insurance object) | To make this a bit easier, and to separate this functionality from the rest of the application, create another class named `JsonSchemaCreator.cs`. In it, add the following code: ``` csp // JsonSchemaCreator.cs using MongoDB.Bson; using System; namespace EnvoyMedSys { public static class JsonSchemaCreator { private static readonly string DETERMINISTIC_ENCRYPTION_TYPE = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"; private static readonly string RANDOM_ENCRYPTION_TYPE = "AEAD_AES_256_CBC_HMAC_SHA_512-Random"; private static BsonDocument CreateEncryptMetadata(string keyIdBase64) { var keyId = new BsonBinaryData(Convert.FromBase64String(keyIdBase64), BsonBinarySubType.UuidStandard); return new BsonDocument("keyId", new BsonArray(new[] { keyId })); } private static BsonDocument CreateEncryptedField(string bsonType, bool isDeterministic) { return new BsonDocument { { "encrypt", new BsonDocument { { "bsonType", bsonType }, { "algorithm", isDeterministic ? DETERMINISTIC_ENCRYPTION_TYPE : RANDOM_ENCRYPTION_TYPE} } } }; } public static BsonDocument CreateJsonSchema(string keyId) { return new BsonDocument { { "bsonType", "object" }, { "encryptMetadata", CreateEncryptMetadata(keyId) }, { "properties", new BsonDocument { { "ssn", CreateEncryptedField("int", true) }, { "bloodType", CreateEncryptedField("string", false) }, { "medicalRecords", CreateEncryptedField("array", false) }, { "insurance", new BsonDocument { { "bsonType", "object" }, { "properties", new BsonDocument { { "policyNumber", CreateEncryptedField("int", true) } } } } } } } }; } } } ``` As before, let's step through each line: First, we create two static variables to hold our encryption types. We use `Deterministic` encryption for fields that are queryable and have high cardinality. We use `Random` encryption for fields we don't plan to query, have low cardinality, or are array fields. Next, we create a `CreateEncryptMetadata()` helper method. This will return a `BsonDocument` that contains our converted data key. We'll use this key in the `CreateJsonSchema()` method. Lines 19-32 make up another helper method called `CreateEncryptedField()`. This generates the proper `BsonDocument` needed to define our encrypted fields. It will output a `BsonDocument` that resembles the following: ``` json "ssn": { "encrypt": { "bsonType": "int", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } } ``` Finally, the `CreateJsonSchema()` method. Here, we generate the full schema our application will use to know which fields to encrypt and decrypt. This method also returns a `BsonDocument`. 
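If you'd like to see exactly what the generated schema looks like before wiring it into the client, you can print it from `Program.cs` once you have the base64 data key id. This is purely for inspection and isn't required by the tutorial:

``` csp
// Optional: dump the generated JSON schema to the console for inspection.
// Requires `using MongoDB.Bson;` at the top of Program.cs for the ToJson() extension.
var schemaPreview = JsonSchemaCreator.CreateJsonSchema(kmsKeyIdBase64);
Console.WriteLine(schemaPreview.ToJson(new MongoDB.Bson.IO.JsonWriterSettings { Indent = true }));
```

You should see the `encryptMetadata` block and an entry for each of the four fields from the table above.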
A few things to note about this schema: Placing the `encryptMetadata` key at the root of our schema allows us to encrypt all fields with a single data key. It's here you see the call to our `CreateEncryptMetadata()` helper method. Within the `properties` key go all the fields we wish to encrypt. So, for our `ssn`, `bloodType`, `medicalRecords`, and `insurance.policyNumber` fields, we generate the respective `BsonDocument` specifications they need using our `CreateEncryptedField()` helper method. With our encrypted fields defined and the necessary encryption keys generated, we can now move onto enabling client-side field level encryption in our MongoDB client! > ☕️ Don't forget to take a break! This is a lot of information to take > in, so don't rush. Be sure to save all your files, then grab a coffee, > stretch, and step away from the computer. This tutorial will be here > waiting when you're ready. :) ## Create the CSFLE-Enabled MongoDB Client A CSFLE-enabled `MongoClient` is not that much different from a standard client. To create an auto-encrypting client, we instantiate it with some additional auto-encryption options. As before, let's create a separate class to hold this functionality. Create a file called `AutoEncryptHelper.cs` and add the following code (note that since this is a bit longer than the other code snippets, I've opted to add inline comments to explain what's happening rather than waiting until after the code block): ``` csp // AutoEncryptHelper.cs using System; using System.Collections.Generic; using System.IO; using MongoDB.Bson; using MongoDB.Driver; using MongoDB.Driver.Encryption; namespace EnvoyMedSys { public class AutoEncryptHelper { private static readonly string __localMasterKeyPath = "../../../master-key.txt"; // Most of what follows are sample fields and a sample medical record we'll be using soon. private static readonly string __sampleNameValue = "Takeshi Kovacs"; private static readonly int __sampleSsnValue = 213238414; private static readonly BsonDocument __sampleDocFields = new BsonDocument { { "name", __sampleNameValue }, { "ssn", __sampleSsnValue }, { "bloodType", "AB-" }, { "medicalRecords", new BsonArray(new [] { new BsonDocument("weight", 180), new BsonDocument("bloodPressure", "120/80") }) }, { "insurance", new BsonDocument { { "policyNumber", 211241 }, { "provider", "EnvoyHealth" } } } }; // Scaffolding of some private variables we'll need. private readonly string _connectionString; private readonly CollectionNamespace _keyVaultNamespace; private readonly CollectionNamespace _medicalRecordsNamespace; // Constructor that will allow us to specify our auto-encrypting // client settings. This also makes it a bit easier to extend and // use with a remote KMS provider later on. public AutoEncryptHelper(string connectionString, CollectionNamespace keyVaultNamespace) { _connectionString = connectionString; _keyVaultNamespace = keyVaultNamespace; _medicalRecordsNamespace = CollectionNamespace.FromFullName("medicalRecords.patients"); } // The star of the show. Accepts a key location, // a key vault namespace, and a schema; all needed // to construct our CSFLE-enabled MongoClient. 
private IMongoClient CreateAutoEncryptingClient( KmsKeyLocation kmsKeyLocation, CollectionNamespace keyVaultNamespace, BsonDocument schema) { var kmsProviders = new Dictionary<string, IReadOnlyDictionary<string, object>>(); // Specify the local master encryption key if (kmsKeyLocation == KmsKeyLocation.Local) { var localMasterKeyBase64 = File.ReadAllText(__localMasterKeyPath); var localMasterKeyBytes = Convert.FromBase64String(localMasterKeyBase64); var localOptions = new Dictionary<string, object> { { "key", localMasterKeyBytes } }; kmsProviders.Add("local", localOptions); } // Because we didn't explicitly specify the collection our // JSON schema applies to, we assign it here. This will map it // to a database called medicalRecords and a collection called // patients. var schemaMap = new Dictionary<string, BsonDocument>(); schemaMap.Add(_medicalRecordsNamespace.ToString(), schema); // Specify location of mongocryptd binary, if necessary. // Not required if path to the mongocryptd.exe executable // has been added to your PATH variables var extraOptions = new Dictionary<string, object>() { // Optionally uncomment the following line if you are running mongocryptd manually // { "mongocryptdBypassSpawn", true } }; // Create CSFLE-enabled MongoClient // The addition of the automatic encryption settings are what // transform this from a standard MongoClient to a CSFLE-enabled // one var clientSettings = MongoClientSettings.FromConnectionString(_connectionString); var autoEncryptionOptions = new AutoEncryptionOptions( keyVaultNamespace: keyVaultNamespace, kmsProviders: kmsProviders, schemaMap: schemaMap, extraOptions: extraOptions); clientSettings.AutoEncryptionOptions = autoEncryptionOptions; return new MongoClient(clientSettings); } } } ``` Alright, we're almost done. Don't forget to save what you have so far! In our next (and final) step, we can finally try out client-side field level encryption with some queries! > 🌟 Know what show this patient is from? Let me know your nerd cred (and > let's be friends, fellow fan!) in a > [tweet! ## Perform Encrypted Read/Write Operations Remember the sample data we've prepared? Let's put that to good use! To test out an encrypted write and read of this data, let's add another method to the `AutoEncryptHelper` class. Right after the constructor, add the following method: ``` csp // AutoEncryptHelper.cs public async void EncryptedWriteAndReadAsync(string keyIdBase64, KmsKeyLocation kmsKeyLocation) { // Construct a JSON Schema var schema = JsonSchemaCreator.CreateJsonSchema(keyIdBase64); // Construct an auto-encrypting client var autoEncryptingClient = CreateAutoEncryptingClient( kmsKeyLocation, _keyVaultNamespace, schema); var collection = autoEncryptingClient .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName) .GetCollection<BsonDocument>(_medicalRecordsNamespace.CollectionName); var ssnQuery = Builders<BsonDocument>.Filter.Eq("ssn", __sampleSsnValue); // Upsert (update document if found, otherwise create it) a document into the collection var medicalRecordUpdateResult = await collection .UpdateOneAsync(ssnQuery, new BsonDocument("$set", __sampleDocFields), new UpdateOptions() { IsUpsert = true }); if (!medicalRecordUpdateResult.UpsertedId.IsBsonNull) { Console.WriteLine("Successfully upserted the sample document!"); } // Query by SSN field with auto-encrypting client var result = collection.Find(ssnQuery).Single(); Console.WriteLine($"Encrypted client query by the SSN (deterministically-encrypted) field:\n {result}\n"); } ``` What's happening here? First, we use the `JsonSchemaCreator` class to construct our schema. 
Then, we create an auto-encrypting client using the `CreateAutoEncryptingClient()` method. Next, we set the working database and collection we'll be interacting with. Finally, we upsert a medical record using our sample data, then retrieve it with the auto-encrypting client.

Prior to inserting this new patient record, the CSFLE-enabled client automatically encrypts the appropriate fields as established in our JSON schema. When retrieving the patient's data, it is decrypted by the client. The nicest part about enabling CSFLE in your application is that the queries don't change, meaning the driver methods you're already familiar with can still be used.

To see this in action, we just have to modify the main program slightly so that we can call the `EncryptedWriteAndReadAsync()` method.

Back in the `Program.cs` file, add the following code:

```csharp
// Program.cs

using System;
using System.IO;
using MongoDB.Driver;

namespace EnvoyMedSys
{
    public enum KmsKeyLocation
    {
        Local,
    }

    class Program
    {
        public static void Main()
        {
            var connectionString = "PASTE YOUR MONGODB CONNECTION STRING/ATLAS URI HERE";
            var keyVaultNamespace = CollectionNamespace.FromFullName("encryption.__keyVault");

            var kmsKeyHelper = new KmsKeyHelper(
                connectionString: connectionString,
                keyVaultNamespace: keyVaultNamespace);
            var autoEncryptHelper = new AutoEncryptHelper(
                connectionString: connectionString,
                keyVaultNamespace: keyVaultNamespace);

            string kmsKeyIdBase64;

            // Ensure GenerateLocalMasterKey() only runs once!
            if (!File.Exists("../../../master-key.txt"))
            {
                kmsKeyHelper.GenerateLocalMasterKey();
            }

            kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithLocalKmsProvider();
            autoEncryptHelper.EncryptedWriteAndReadAsync(kmsKeyIdBase64, KmsKeyLocation.Local);

            Console.ReadKey();
        }
    }
}
```

Alright, this is it! Save your files and then run your program. After a short wait, you should see the following console output:

Console output of an encrypted write and read

It works! The console output you see has been decrypted correctly by our CSFLE-enabled MongoClient. We can also verify that this patient record has been properly saved to our database. Logging into my Atlas cluster, I see Takeshi's patient record stored securely, with the specified fields encrypted:

Encrypted patient record stored in MongoDB Atlas

## Bonus: What's the Difference with a Non-Encrypted Client?

To see how these queries perform when using a non-encrypting client, let's add one more method to the `AutoEncryptHelper` class.
Right after the `EncryptedWriteAndReadAsync()` method, add the following:

```csharp
// AutoEncryptHelper.cs

public void QueryWithNonEncryptedClient()
{
    var nonAutoEncryptingClient = new MongoClient(_connectionString);
    var collection = nonAutoEncryptingClient
        .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName)
        .GetCollection<BsonDocument>(_medicalRecordsNamespace.CollectionName);

    var ssnQuery = Builders<BsonDocument>.Filter.Eq("ssn", __sampleSsnValue);

    var result = collection.Find(ssnQuery).FirstOrDefault();
    if (result != null)
    {
        throw new Exception("Expected no document to be found but one was found.");
    }

    // Query by name field with a normal non-auto-encrypting client
    var nameQuery = Builders<BsonDocument>.Filter.Eq("name", __sampleNameValue);
    result = collection.Find(nameQuery).FirstOrDefault();
    if (result == null)
    {
        throw new Exception("Expected the document to be found but none was found.");
    }

    Console.WriteLine($"Query by name (non-encrypted field) using non-auto-encrypting client returned:\n {result}\n");
}
```

Here, we instantiate a standard MongoClient with no auto-encryption settings. Notice that we query by the non-encrypted `name` field; this is because we can't query on encrypted fields using a MongoClient without CSFLE enabled.

Finally, add a call to this new method in the `Program.cs` file:

```csharp
// Program.cs

// Comparison query on non-encrypting client
autoEncryptHelper.QueryWithNonEncryptedClient();
```

Save all your files, then run your program again. You'll see your last query returns an encrypted patient record, as expected. Since we are using a non-CSFLE-enabled MongoClient, no decryption happens, leaving only the non-encrypted fields legible to us:

Query output using a non-CSFLE-enabled MongoClient. Since no decryption happens, the data is properly returned in an encrypted state.

## Let's Recap

Cheers! You've made it this far! Really, pat yourself on the back. This was a serious tutorial!

This tutorial walked you through:

* Creating a .NET Core console application.
* Installing dependencies needed to enable client-side field level encryption for your .NET Core app.
* Creating a local master key.
* Creating a data encryption key.
* Constructing a JSON Schema to establish which fields to encrypt.
* Configuring a CSFLE-enabled MongoClient.
* Performing an encrypted read and write of a sample patient record.
* Performing a read using a non-CSFLE-enabled MongoClient to see the difference in the retrieved data.

With this knowledge of client-side field level encryption, you should be able to better secure applications and understand how it works!

> I hope this tutorial made client-side field level encryption simpler to integrate into your .NET application! If you have any further questions or are stuck on something, head over to the MongoDB Community Forums and start a topic. A whole community of MongoDB engineers (including the DevRel team) and fellow developers are sure to help!

In case you want to learn a bit more, here are the resources that were crucial to helping me write this tutorial:

* Client-Side Field Level Encryption - .NET Driver
* CSFLE Examples - .NET Driver
* Client-Side Field Level Encryption - Security Docs
* Automatic Encryption Rules
md
{ "tags": [ "C#", "MongoDB" ], "pageDescription": "Learn how to use MongoDB client-side field level encryption (CSFLE) with a C# application.", "contentType": "Code Example" }
How to Use MongoDB Client-Side Field Level Encryption (CSFLE) with C#
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/how-netlify-backfilled-2-million-documents
created
# How We Backfilled 2 Million Database Documents We recently needed to backfill nearly two million documents in our MongoDB database with a new attribute and wanted to share our process. First, some context on why we were doing this: This backfill was to support Netlify's Growth team, which builds prototypes into Netlify's core product and then evaluates how those prototypes impact user conversion and retention rates. If we find that a prototype positively impacts growth, we use that finding to shape deeper investments in a particular area of the product. In this case, to measure the impact of a prototype, we needed to add an attribute that didn't previously exist to one of our database models. With that out of the way, let's dive into how we did it! Backend engineer Eric Betts and I started with a script from a smaller version of this task: backfilling 130,000 documents. The smaller backfill had taken about 11 hours, including time to tweak the script and restart it a few times when it died. At a backfill rate of 175-200 documents per minute, we were looking at a best-case scenario of eight to nine days straight of backfilling for over two million total documents, and that's assuming everything went off without a hitch. With a much bigger backfill ahead of us, we needed to see if we could optimize. The starting script took two arguments—a `batch_size` and `thread_pool_size` size—and it worked like this: 1. Create a new queue. 2. Create a variable to store the number of documents we've processed. 3. Query the database, limiting returned results to the `batch_size` we passed in. 4. Push each returned document into the queue. 5. Create the number of worker threads we passed in with the `thread_pool_size` argument. 6. Each thread makes an API call to a third-party API, then writes our new attribute to our database with the result from the third-party API. 7. Update our count of documents processed. 8. When there are no more documents in the queue to process, clean up the threads. The script runs on a Kubernetes pod with memory and CPU constraints. It reads from our production MongoDB database and writes to a secondary. ## More repos, more problems When scaling up the original script to process 20 times the number of documents, we quickly hit some limitations: **Pod memory constraints.** Running the script with `batch_size` of two million documents and `thread_pool_size` of five was promptly killed by the Kubernetes pod: ```rb Backfill.run(2000000, 5) ``` **Too much manual intervention.** Running with `batch_size` of 100 and `thread_pool` of five worked much better: ```rb Backfill.run(100, 5) ``` It ran super fast 🚀 there were no errors ✨... but we would have had to manually run it 20,000 times. **Third-party API rate limits.** Even with a reliable `batch_size`, we couldn't crank the `thread_pool_size` too high or we'd hit rate limits at the third-party API. Our script would finish running, but many of our documents wouldn't actually be backfilled, and we'd have to iterate over them again. ## Brainstorming solutions Eric and I needed something that met the following criteria: * Doesn't use so much memory that it kills the Kubernetes pod. * Doesn't use so much memory that it noticeably increases database read/write latency. * Iterates through a complete batch of objects at a time; the job shouldn't die before at least attempting to process a full batch. * Requires minimal babysitting. Some manual intervention is okay, but we need a job to run for several hours by itself. 
* Lets us pick up where we left off. If the job dies, we don't want to waste time re-processing documents we've already processed once.

With this list of criteria, we started brainstorming solutions. We could:

1. Dig into why the script was timing out before processing the full batch.
2. Store references to documents that failed to be updated, and loop back over them later.
3. Find a way to order the results returned by the database.
4. Automatically add more jobs to the queue once the initial batch was processed.

## Optimizations

### You're in time out

#1 was an obvious necessity. We started logging the thread index to see if it would tell us anything:

```rb
def self.run(batch_size, thread_pool_size)
  jobs = Queue.new

  # get all the documents that meet these criteria
  objs = Obj.where(...)

  # limit the returned objects to the batch_size
  objs = objs.limit(batch_size)

  # push each document into the jobs queue to be processed
  objs.each { |o| jobs.push o }

  # create a thread pool
  workers = (thread_pool_size).times.map do |i|
    Thread.new do
      begin
        while obj = jobs.pop(true)
          # log the thread index and object ID
          Rails.logger.with_fields(thread: i, obj: obj.id)
          begin
            # process objects
          end
          ...
```

This new log line let us see threads die off as the script ran. We'd go from running with five threads:

```
thread="4" obj="939bpca..."
thread="1" obj="939apca..."
thread="5" obj="939cpca..."
thread="2" obj="939dpca..."
thread="3" obj="939fpca..."
thread="4" obj="969bpca..."
thread="1" obj="969apca..."
thread="5" obj="969cpca..."
thread="2" obj="969dpca..."
thread="3" obj="969fpca..."
```

to running with a few:

```
thread="4" obj="989bpca..."
thread="1" obj="989apca..."
thread="4" obj="979bpca..."
thread="1" obj="979apca..."
```

to running with none. We realized that when a thread would hit an error in an API request or a write to our database, we were rescuing and printing the error, but not continuing with the loop. This was a simple fix: When we `rescue`, continue to the `next` iteration of the loop.

```rb
begin
  # process documents
rescue
  next
end
```

### Order, order

In a new run of the script, we needed a way to pick up where we left off. Idea #2—keeping track of failures across iterations of the script—was technically possible, but it wasn't going to be pretty. We expected idea #3—ordering the query results—to solve the same problem, but in a better way, so we went with that instead.

Eric came up with the idea to order our query results by `created_at` date. This way, we could pass a `not_before` date argument when running the script to ensure that we weren't processing already-processed objects. We could also print each document's `created_at` date as it was processed, so that if the script died, we could grab that date and pass it into the next run. Here's what it looked like:

```rb
def self.run(batch_size, thread_pool_size, not_before)
  jobs = Queue.new

  # order the query results by created_at date
  objs = Obj.where(...).order(created_at: -1)

  # get documents created after the not_before date
  objs = objs.where(:created_at.gte => not_before)

  # limit the returned documents to the batch_size
  objs = objs.limit(batch_size)

  # push each document into the jobs queue to be processed
  objs.each { |o| jobs.push o }

  workers = (thread_pool_size).times.map do |i|
    Thread.new do
      begin
        while obj = jobs.pop(true)
          # log each document's created_at date as it's processed
          Rails.logger.with_fields(thread: i, obj: obj.id, created_at: obj.created_at)
          begin
            # process documents
          rescue
            next
          end
          ...
``` So a log line might look like: `thread="6" obj="979apca..." created_at="Wed, 11 Nov 2020 02:04:11.891000000 UTC +00:00"` And if the script died after that line, we could grab that date and pass it back in: `Backfill.run(50000, 10, "Wed, 11 Nov 2020 02:04:11.891000000 UTC +00:00")` Nice! Unfortunately, when we added the ordering, we found that we unintentionally introduced a new memory limitation: the query results were sorted in memory, so we couldn't pass in too large of a batch size or we'd run out of memory on the Kubernetes pod. This lowered our batch size substantially, but we accepted the tradeoff since it eliminated the possibility of redoing work that had already been done. ### The job is never done The last critical task was to make our queue add to itself once the original batch of documents was processed. Our first approach was to check the queue size, add more objects to the queue when queue size reached some threshold, and re-run the original query, but skip all the returned query results that we'd already processed. We stored the number we'd already processed in a variable called `skip_value`. Each time we added to the queue, we would increase `skip_value` and skip an increasingly large number of results. You can tell where this is going. At some point, we would try to skip too large of a value, run out of memory, fail to refill the queue, and the job would die. ```rb skip_value = batch_size step = batch_size loop do if jobs.size < 1000 objs = Obj.where(...).order(created_at: -1) objs = objs.where(:created_at.gte => created_at) objs = objs.skip(skip_value).limit(step) # <--- job dies when this skip_value gets too big ❌ objs.each { |r| jobs.push r } skip_value += step # <--- this keeps increasing as we process more objects ❌ if objs.count == 0 break end end end ``` We ultimately tossed out the increasing `skip_value`, opting instead to store the `created_at` date of the last object processed. This way, we could skip a constant, relatively low number of documents instead of slowing down and eventually killing our query by skipping an increasing number: ```rb refill_at = 1000 step = batch_size loop do if jobs.size < refill_at objs = Obj.where(...).order(created_at: -1) objs = objs.where(:created_at.gte => last_created_at) # <--- grab last_created_at constant from earlier in the script ✅ objs = objs.skip(refill_at).limit(step) # <--- skip a constant amount ✅ objs.each { |r| jobs.push r } if objs.count == 0 break end end end ``` So, with our existing loop to create and kick off the threads, we have something like this: ```rb def self.run(batch_size, thread_pool_size, not_before) jobs = Queue.new objs = Obj.where(...).order(created_at: -1) objs = objs.where(:created_at.gte => not_before) objs = objs.limit(step) objs.each { |o| jobs.push o } updated = 0 last_created_at = "" # <--- we update this variable... 
  workers = (thread_pool_size).times.map do |i|
    Thread.new do
      begin
        while obj = jobs.pop(true)
          Rails.logger.with_fields(thread: i, obj: obj.id, created_at: obj.created_at)
          begin
            # process documents
            updated += 1
            last_created_at = obj.created_at # <--- ...with each document processed
          rescue
            next
          end
        end
      end
    end
  end

  loop do
    skip_value = batch_size
    step = 10000

    if jobs.size < 1000
      objs = Obj.where(...).order(created_at: -1)
      objs = objs.where(:created_at.gte => not_before)
      objs = objs.skip(skip_value).limit(step)
      objs.each { |r| jobs.push r }

      skip_value += step

      if objs.count == 0
        break
      end
    end
  end

  workers.map(&:join)
end
```

With this, we were finally getting the queue to add to itself when it was done. But the first time we ran this, we saw something surprising. The initial batch of 50,000 documents was processed quickly, and then the next batch that was added by our self-adding queue was processed very slowly. We ran `top -H` to check CPU and memory usage of our script on the Kubernetes pod and saw that it was using 90% of the system's CPU.

Adding a few `sleep` statements between loop iterations helped us get CPU usage down to a very reasonable 6% for the main process. With these optimizations ironed out, Eric and I were able to complete our backfill at a processing rate of 800+ documents/minute with no manual intervention. Woohoo!
md
{ "tags": [ "MongoDB", "Kubernetes" ], "pageDescription": "Learn how the Netlify growth team reduced the time it took to backfill nearly two million documents in our MongoDB database with a new attribute.", "contentType": "Tutorial" }
How We Backfilled 2 Million Database Documents
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/rust/rust-field-level-encryption
created
# MongoDB Field Level Encryption is now Available for Rust applications We have some exciting news to announce for Rust developers. Our 2.4.1 release of the MongoDB Rust driver brings a raft of new, innovative features for developers building Rust applications. ## Field Level Encryption for Rust Applications This one has been a long time coming. The 2.4.1 version of the MongoDB Rust driver contains field level encryption capabilities - both client side field level encryption and queryable encryption. Starting with MongoDB 4.2, client-side field level encryption allows an application to encrypt specific data fields in addition to pre-existing MongoDB encryption features such as Encryption at Rest and TLS/SSL (Transport Encryption). With field level encryption, applications can encrypt fields in documents prior to transmitting data over the wire to the server. Client-side field level encryption supports workloads where applications must guarantee that unauthorized parties, including server administrators, cannot read the encrypted data. For more information, see the Encryption section of the Rust driver documentation. ## GridFS Rust Support The 2.4.1 release of the MongoDB Rust driver also (finally!) added support for GridFS, allowing storage and retrieval of files that exceed the BSON document size limit. ## Tracing Support This release had one other noteworthy item in it - the driver now emits tracing events at points of interest. Note that this API is considered unstable as the tracing crate has not reached 1.0 yet; future minor versions of the driver may upgrade the tracing dependency to a new version which is not backwards-compatible with Subscribers that depend on older versions of tracing. You can read more about tracing from the crates.io documentation here. ## Install the MongoDB Rust Driver To check out these new features, you'll need to install the MongoDB Rust driver, which is available on crates.io. To use the driver in your application, simply add it to your project's Cargo.toml. ``` [dependencies] mongodb = "2.4.0-beta" ```
md
{ "tags": [ "Rust" ], "pageDescription": "MongoDB now support field level encryption for Rust applications", "contentType": "Article" }
MongoDB Field Level Encryption is now Available for Rust applications
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/queryable-encryption-james-bond
created
# How Queryable Encryption Can Keep James Bond Safe Companies of all sizes are continuing to embrace the power of data. With that power, however, comes great responsibility — namely, the responsibility to protect that data and customers, comply with data privacy regulations, and to control and limit access to confidential and regulated data. Though existing encryption solutions, both in-transit and at-rest, do cover many of the use cases above, none of them protect sensitive data while it’s in use. However, in-use encryption is often a requirement for high-sensitivity workloads, particularly for customers in financial services, healthcare, and critical infrastructure organizations. Queryable Encryption, a new feature from MongoDB currently in **preview**, offers customers a way to encrypt sensitive data and keep it encrypted throughout its entire lifecycle, whether it’s in memory, logs, in-transit, at-rest, or in backups. You can now encrypt sensitive data on the client side, store it as fully randomized encrypted data on the server side, and run expressive queries on that encrypted data. Data is never in cleartext in the database, but MongoDB can still process queries and execute operations on the server side. Find more details on Queryable Encryption. ## Setting up Queryable Encryption with Java There are two ways to set up Queryable Encryption. You can either go the automatic encryption route, which allows you to perform encrypted reads and writes without needing to write code specifying how the fields should be encrypted, or you could go the manual route, which means you’ll need to specify the logic for encryption. To use Queryable Encryption with Java, you’ll need 4.7.0-beta0 (or later) of the Java driver, and version 1.5.0-rc2 (or later) of MongoCrypt. You’ll also need either MongoDB Atlas or MongoDB Enterprise if you want to use automatic encryption. If you don’t have Atlas or Enterprise, no worries! You can get a free forever cluster on Atlas by registering. Once you’ve completed those prerequisites, you can set up Queryable Encryption and specify which fields you’d like to encrypt. Check out the quick start to learn more. ## Okay, but what does this have to do with James Bond? Let’s explore the following use case. Assume, for a moment, that you work for a top-secret company and you’re tasked with keeping the identities of your employees shrouded in secrecy. The below code snippet represents a new employee, James Bond, who works at your company: ``` Document employee = new Document()        .append("firstName", "James")        .append("lastName", "Bond")        .append("employeeId", 1006)        .append("address", "30 Wellington Sq"); ``` The document containing James Bond’s information is added to an “employees” collection that has two encrypted fields, **employeeId** and **address**. Learn more about encrypted fields. 
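As a sketch of how such a document ends up in the collection, the fragment below inserts it with a client configured for automatic encryption. The client settings and database name are illustrative assumptions, not part of the original example; the `getCollection()` and `insertOne()` calls are standard Java driver usage.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

// "encryptedClientSettings" is assumed to be a MongoClientSettings instance configured
// with AutoEncryptionSettings, as described in the Queryable Encryption quick start.
MongoClient client = MongoClients.create(encryptedClientSettings);

// Hypothetical database name; "employees" is the collection with the two encrypted fields.
MongoCollection<Document> employees = client
        .getDatabase("topSecretCompany")
        .getCollection("employees");

// "employee" is the Document built in the snippet above; the encrypted fields are
// encrypted on the client before the document ever leaves the application.
employees.insertOne(employee);
```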
Assuming someone, maybe Auric Goldfinger, wanted to find James Bond’s address but didn’t have access to an encrypted client, they’d only be able to see the following: ``` “firstName” : “James”, “lastName” : “Bond”, "employeeId": {"$binary": {"base64": "B5XwlQMzFkOmmW0VTcE1QhoQ/ZYHhyMqItvaD+J9AfsAFf1koD/TaYpJG/sCOugnDlE7b4K+mciP63k+RdxMw4OVhYUhsCkFPrhvMtk0l8bekyYWhd8Leky+mcNTy547dJF7c3WdaIumcKIwGKJ7vN0Zs78pcA+86SKOA3LCnojK4Zdewv4BCwQwsqxgEAWyDaT9oHbXiUJDae7s+EWj+ZnfZWHyYJNR/oZtaShrooj2CnlRPK0RRInV3fGFzKXtiOJfxXznYXJ//D0zO4Bobc7/ur4UpA==", "subType": "06"}}, "address": {"$binary": {"base64": "Biue77PFDUA9mrfVh6jmw6ACi4xP/AO3xvBcQRCp7LPjh0V1zFPU1GntlyWqTFeHfBARaEOuXHRs5iRtD6Ha5v5EjRWZ9nufHgg6JeMczNXmYo7sOaDJ", "subType": "06"}} ``` Of the four fields in my document, the last two remained encrypted (**employeeId** and **address**). Because Auric’s client was unencrypted, he wasn’t able to access James Bond’s address. However, if Auric were using an encrypted client, he’d be able to see the following: ``` "firstName": "James",  "lastName": "Bond",  "employeeId": 1006,  "address": "30 Wellington Sq" ``` …and be able to track down James Bond. ### Summary Of course, my example with James Bond is fictional, but I hope that it illustrates one of the many ways that Queryable Encryption can be helpful. For more details, check out our docs or the following helpful links: * Supported Operations for Queryable Encryption * Driver Compatibility Table * Automatic Encryption Shared Library If you run into any issues using Queryable Encryption, please let us know in Community Forums or by filing tickets on the JAVA project. Happy encrypting!
md
{ "tags": [ "MongoDB", "Java" ], "pageDescription": "Learn more about Queryable Encryption and how it could keep one of literature's legendary heroes safe.", "contentType": "Article" }
How Queryable Encryption Can Keep James Bond Safe
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/mongodb/strapi-headless-cms-with-atlas
created
# Use MongoDB as the Data Store for your Strapi Headless CMS The modern web is evolving quickly and one of the best innovations in recent years is the advent of Headless CMS frameworks. I believe that Headless CMS systems will do for content what RESTful APIs did for SaaS. The idea is simple: You decouple content creation and management from the presentation layer. You then expose the content through either RESTful or GraphQL APIs to be consumed by the front end. Headless CMS frameworks work especially well with static site generators which have traditionally relied on simple markdown files for content management. This works great for a small personal blog, for example, but quickly becomes a management mess when you have multiple authors, many different types of content, and ever-changing requirements. A Headless CMS system takes care of content organization and creation while giving you flexibility on how you want to present the content. Today, we are going to look at an open-source Headless CMS called Strapi. Strapi comes from the word "bootstrap," and helps bootSTRAP your API. In this post, we'll look at some of the features of Strapi and how it can help us manage our content as well as how we can combine it with MongoDB to have a modern content management platform. ## Prerequisites For this tutorial, you'll need: - Node.js - npm - MongoDB You can download Node.js here, and it will come with the latest version of npm and npx. For MongoDB, use MongoDB Atlas for free. ## What is Strapi? Strapi is an open-source Headless CMS framework. It is essentially a back-end or admin panel for content creation. It allows developers to easily define a custom content structure and customize it fully for their use case. The framework has a really powerful plug-in system for making content creation and management painless regardless of your use-case. In this tutorial, we'll set up and configure Strapi. We'll do it in two ways. First, we'll do a default install to quickly get started and show off the functionality of Strapi, and then we'll also create a second instance that uses MongoDB as the database to store our content. ## Bootstrapping Strapi To get started with Strapi, we'll execute a command in our terminal using npx. If you have a recent version of Node and npm installed, npx will already be installed as well so simply execute the following command in a directory where you want your Strapi app to live: ``` bash npx create-strapi-app my-project --quickstart ``` Feel free to change the `my-project` name to a more suitable option. The `--quickstart` argument will use a series of default configuration options to get you up and running as quickly as possible. The npx command will take some time to run and download all the packages it needs, and once it's done, it will automatically start up your Strapi app. If it does not, navigate to the `my-project` directory and run: ``` bash npm run develop ``` This will start the Strapi server. When it is up and running, navigate to `localhost:1337` in your browser and you'll be greeted with the following welcome screen: Fill out the information with either real or fake data and you'll be taken to your new dashboard. If you see the dashboard pictured above, you are all set! When we passed the `--quickstart` argument in our npx command, Strapi created a SQLite database to use and store our data. You can find this database file if you navigate to your `my-project` directory and look in the `.tmp` directory. 
Feel free to mess around in the admin dashboard to familiarize yourself with Strapi. Next, we're going to rerun our creation script, but this time, we won't pass the `--quickstart` argument. We'll have to set a couple of different configuration items, primarily our database config. When you're ready proceed to the next section. ## Bootstrapping Strapi with MongoDB Before we get into working with Strapi, we'll re-run the installation script and change our database provider from the default SQLite to MongoDB. There are many reasons why you'd want to use MongoDB for your Strapi app, but one of the most compelling ones to me is that many virtual machines are ephemeral, so if you're installing Strapi on a VM to test it out, every time you restart the app, that SQLite DB will be gone and you'll have to start over. Now then, let's go ahead and stop our Strapi app from running and delete the `my-project` folder. We'll start clean. After you've done this, run the following command: ``` bash npx create-strapi-app my-project ``` After a few seconds you'll be prompted to choose an installation type. You can choose between **Quickstart** and **Custom**, and you'll want to select **Custom**. Next, for your database client select **MongoDB**, in the CLI it may say **mongo**. For the database name, you can choose whatever name makes sense to you, I'll go with **strapi**. You do not already have to have a database created in your MongoDB Atlas instance, Strapi will do this for you. Next, you'll be prompted for the Host URL. If you're running your MongoDB database on Atlas, the host will be unique to your cluster. To find it, go to your MongoDB Atlas dashboard, navigate to your **Clusters** tab, and hit the **Connect** button. Choose any of the options and your connection string will be displayed. It will be the part highlighted in the image below. Add your connection string, and the next option you'll be asked for will be **+srv connection** and for this, you'll say **true**. After that, you'll be asked for a Port, but you can ignore this since we are using a `srv` connection. Finally, you will be asked to provide your username and password for the specific cluster. Add those in and continue. You'll be asked for an Authentication database, and you can leave this blank and just hit enter to continue. And at the end of it all, you'll get your final question asking to **Enable SSL connection** and for this one pass in **y** or **true**. Your terminal window will look something like this when it's all said and done: ``` none Creating a new Strapi application at C:\Users\kukic\desktop\strapi\my-project. ? Choose your installation type Custom (manual settings) ? Choose your default database client mongo ? Database name: strapi ? Host: {YOUR-MONGODB-ATLAS-HOST} ? +srv connection: true ? Port (It will be ignored if you enable +srv): 27017 ? Username: ado ? Password: ****** ? Authentication database (Maybe "admin" or blank): ? Enable SSL connection: (y/N) Y ``` Once you pass the **Y** argument to the final question, npx will take care of the rest and create your Strapi app, this time using MongoDB for its data store. To make sure everything works correctly, once the install is done, navigate to your project directory and run: ``` bash npm run develop ``` Your application will once again run on `localhost:1337` and you'll be greeted with the familiar welcome screen. 
To see the database schema in MongoDB Atlas, navigate to your dashboard, go into the cluster you've chosen to install the Strapi database, and view its collections. By default it will look like this: ## Better Content Management with Strapi Now that we have Strapi set up to use MongoDB as our database, let's go into the Strapi dashboard at `localhost:1337/admin` and learn to use some of the features this Headless CMS provides. We'll start by creating a new content type. Navigate to the **Content-Types Builder** section of the dashboard and click on the **Create New Collection Type** button. A collection type is, as the name implies, a type of content for your application. It can be a blog post, a promo, a quick-tip, or really any sort of content you need for your application. We'll create a blog post. The first thing we'll need to do is give it a name. I'll give my blog posts collection the very creative name of **Posts**. Once we have the name defined, next we'll add a series of fields for our collection. This is where Strapi really shines. The default installation gives us many different data types to work with such as text for a title or rich text for the body of a blog post, but Strapi also allows us to create custom components and even customize these default types to suit our needs. My blog post will have a **Title** of type **Text**, a **Content** element for the content of the post of type **Rich Text**, and a **Published** value of type **Date** for when the post is to go live. Feel free to copy my layout, or create your own. Once you're satisfied hit the save button and the Strapi server will restart and you'll see your new collection type in the main navigation. Let's go ahead and create a few posts for our blog. Now that we have some posts created, we can view the content both in the Strapi dashboard, as well as in our MongoDB Atlas collections view. Notice in MongoDB Atlas that a new collection called **posts** was created and that it now holds the blog posts we've written. We are only scratching the surface of what's available with Strapi. Let me show you one more powerful feature of Strapi. - Create a new Content Type, call it **Tags**, and give it only one field called **name**. - Open up your existing Posts collection type and hit the **Add another field** button. - From here, select the field type of **Relation**. - On the left-hand side you'll see Posts, and on the right hand click the dropdown arrow and find your new **Tags** collection and select it. - Finally, select the last visual so that it says **Post has many Tags** and hit **Finish**. Notice that some of the options are traditional 1\:1, 1\:M, M\:M relationships that you might remember from the traditional RDBMS world. Note that even though we're using MongoDB, these relationships will be correctly represented so you don't have to worry about the underlying data model. Go ahead and create a few entries in your new Tags collection, and then go into an existing post you have created. You'll see the option to add `tags` to your post now and you'll have a dropdown menu to choose from. No more guessing what the tag should be... is it NodeJS, or Node.Js, maybe just Node? ## Accessing Strapi Content So far we have created our Strapi app, created various content types, and created some content, but how do we make use of this content in the applications that are meant to consume it? We have two options. We can expose the data via RESTful endpoints, or via GraphQL. I'll show you both. 
Let's first look at the RESTful approach. When we create a new content type, Strapi automatically creates an accompanying RESTful endpoint for us. So we could access our posts at `localhost:1337/posts` and our tags at `localhost:1337/tags`. But not so fast! If we try to navigate to either of these endpoints, we'll be greeted with a `403 Forbidden` message. We haven't made these endpoints publicly available.

To do this, go into the **Roles & Permissions** section of the Strapi dashboard, select the **Public** role and you'll see a list of permissions by feature and content type. By default, they're all disabled. For our demo, let's enable the **count**, **find**, and **findOne** permissions for the **Posts** and **Tags** collections.

Now if you navigate to `localhost:1337/posts` or `localhost:1337/tags` you'll see your content delivered in JSON format.

To access our content via GraphQL, we'll need to enable the GraphQL plugin. Navigate to the **Marketplace** tab in the Strapi dashboard and download the GraphQL plugin. It will take a couple of minutes to download and install the plugin. Once it is installed, you can access all of your content by navigating to `localhost:1337/graphql`. You'll have to ensure that the Roles & Permissions for the different collections are available, but if you've done the RESTful example above they will be.

We get everything we'd expect with the GraphQL plugin. We can view our entire schema and docs, run queries and mutations, and it all just works. Now we can easily consume this data with any front-end. Say we're building an app with Gatsby or Next.js, we can call our endpoint, get all the data and generate all the pages ahead of time, giving us best-in-class performance as well as content management.

## Putting It All Together

In this tutorial, I introduced you to Strapi, one of the best open-source Headless CMS frameworks around. I covered how you can use Strapi with MongoDB to have a permanent data store, and I covered various features of the Strapi framework.

Finally, I showed you how to access your Strapi content with both RESTful APIs as well as GraphQL. If you would like to see an article on how we can consume our Strapi content in a static website generator like Gatsby or Hugo, or how you can extend Strapi for your use case, let me know in the MongoDB Community forums, and I'll be happy to do a write-up!

>If you want to safely store your Strapi content in MongoDB, sign up for MongoDB Atlas for free.

Happy content creation!
md
{ "tags": [ "MongoDB", "JavaScript", "Node.js" ], "pageDescription": "Learn how to use MongoDB Atlas as a data store for your Strapi Headless CMS.", "contentType": "Tutorial" }
Use MongoDB as the Data Store for your Strapi Headless CMS
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/integrate-atlas-application-services-logs-datadog-aws
created
# Integrate Atlas Application Services Logs into Datadog on AWS

Datadog is a well-known monitoring and security platform for cloud applications. Datadog's software-as-a-service (SaaS) platform integrates and automates infrastructure monitoring, application performance monitoring, and log management to provide unified, real-time observability of a customer's entire technology stack.

MongoDB Atlas on Amazon Web Services (AWS) already supports easy integration with Datadog for alerts and events right within the Atlas UI (select the three vertical dots → Integration → Datadog). With the Log Forwarding feature, it's now possible to send Atlas Application Services logs to Datadog. This blog outlines the necessary configuration steps as well as strategies for customizing the view to suit your needs.

**Atlas Application Services** (formerly MongoDB Realm) is a set of enhanced services that complement the Atlas database to simplify the development of backend applications. App Services-based apps can react to changes in your MongoDB Atlas data, connect that data to other systems, and scale to meet demand without the need to manage the associated server infrastructure.

App Services provides user authentication and management, schema validation and data access rules, event-driven serverless functions, secure client-side queries with HTTPS Endpoints, and best of all, synchronization of data across devices with the Realm Mobile SDK. With App Services and Datadog, you can simplify the end-to-end development and monitoring of your application.

**Atlas App Services** specifically enables the forwarding of logs to Datadog via a serverless function that can also give more fine-grained control over how these logs appear in Datadog, via customizing the associated tags.

## Atlas setup

We assume that you already have an Atlas account. If not, you can sign up for a free account on MongoDB or the AWS Marketplace. Once you have an Atlas account, if you haven't had a chance to try App Services with Atlas, you can follow one of our tutorials to get a running application working quickly.

To initiate custom log forwarding, follow the instructions for App Services to configure log forwarding. Specifically, choose the "To Function" option:

Within Atlas App Services, we can create a custom function that handles mapping and ingesting the logs into Datadog. Please note the intake endpoint URL from Datadog first, which is documented by Datadog. Here's a sample function that provides that basic capability:

```
exports = async function(logs) {
  // `logs` is an array of 1-100 log objects
  // Use an API or library to send the logs to another service.
  await context.http.post({
    url: "https://http-intake.logs.datadoghq.com/api/v2/logs",
    headers: {
      "DD-API-KEY": ["XXXXXX"],
      "DD-APPLICATION-KEY": ["XXXXX"],
      "Content-Type": ["application/json"]
    },
    body: logs.map(x => {return {
      "ddsource": "mongodb.atlas.app.services",
      "ddtags": "env:test,user:igor",
      "hostname": "RealmApp04",
      "service": "MyRealmService04",
      "message" : JSON.stringify(x)
    }}),
    encodeBodyAsJSON: true
  });
}
```

One of the capabilities of the snippet above is that it allows you to modify the function to supply your Datadog API and application keys. This provides the capability to customize the experience and provide the appropriate context for better observability. You can change the ddtags, hostname, and service parameters to reflect your organization, team, environment, or application structure. These parameters will appear as facets helping with filtering the logs.
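For reference, each element of the array that the function above posts to Datadog has the following shape (values taken from the sample function; the `message` value is the stringified App Services log entry):

```json
{
  "ddsource": "mongodb.atlas.app.services",
  "ddtags": "env:test,user:igor",
  "hostname": "RealmApp04",
  "service": "MyRealmService04",
  "message": "<the JSON.stringify()-ed App Services log entry>"
}
```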
Note: Datadog supports log ingestion pipelines that allow it to better parse logs. In order for the MongoDB log pipeline to work, your *ddsource* must be set to `mongodb.atlas.app.services`. ## Viewing the logs in Datadog Once log forwarding is configured, your Atlas App Services logs will appear in the Datadog Logs module. You can click on an individual log entry to see the detailed view: ## Conclusion In this blog, we showed how to configure log forwarding for Atlas App Services logs. If you would like to try configuring log forwarding yourself, sign up for a 14-day free trial of Datadog if you don’t already have an account. To try Atlas App Services in AWS Marketplace, sign up for a free account.
md
{ "tags": [ "Atlas", "AWS" ], "pageDescription": "With the Log Forwarding feature, it's now possible to send Atlas Application Services logs to Datadog. This blog outlines the configuration steps necessary as well as strategies for customizing the view to suit the need.", "contentType": "Tutorial" }
Integrate Atlas Application Services Logs into Datadog on AWS
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/connectors/mastering-ops-manager
created
# Mastering MongoDB Ops Manager on Kubernetes This article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators. - Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud - Mastering MongoDB Ops Manager - Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti Managing MongoDB deployments can be a rigorous task, particularly when working with large numbers of databases and servers. Without the right tools and processes in place, it can be time-consuming to ensure that these deployments are running smoothly and efficiently. One significant issue in managing MongoDB clusters at scale is the lack of automation, which can lead to time-consuming and error-prone tasks such as backups, recovery, and upgrades. These tasks are crucial for maintaining the availability and performance of your clusters. Additionally, monitoring and alerting can be a challenge, as it may be difficult to identify and resolve issues with your deployments. To address these problems, it's essential to use software that offers monitoring and alerting capabilities. Optimizing the performance of your deployments also requires guidance and support from the right sources. Finally, it's critical for your deployments to be secure and compliant with industry standards. To achieve this, you need features that can help you determine if your deployments meet these standards. MongoDB Ops Manager is a web-based application designed to assist with the management and monitoring of MongoDB deployments. It offers a range of features that make it easier to deploy, manage, and monitor MongoDB databases, such as: - Automated backups and recovery: Ops Manager can take automated backups of your MongoDB deployments and provide options for recovery in case of failure. - Monitoring and alerting: Ops Manager provides monitoring and alerting capabilities to help identify and resolve issues with your MongoDB deployments. - Performance optimization: Ops Manager offers tools and recommendations to optimize the performance of your MongoDB deployments. - Upgrade management: Ops Manager can help you manage and plan upgrades to your MongoDB deployments, including rolling upgrades and backups to ensure data availability during the upgrade process. - Security and compliance: Ops Manager provides features to help you secure your MongoDB deployments and meet compliance requirements. However, managing Ops Manager can be a challenging task that requires a thorough understanding of its inner workings and how it interacts with the internal MongoDB databases. It is necessary to have the knowledge and expertise to perform upgrades, monitor it, audit it, and ensure its security. As Ops Manager is a crucial part of managing the operation of your MongoDB databases, its proper management is essential. Fortunately, the MongoDB Enterprise Kubernetes Operator enables us to run Ops Manager on Kubernetes clusters, using native Kubernetes capabilities to manage Ops Manager for us, which makes it more convenient and efficient. ## Kubernetes: MongoDBOpsManager custom resource The MongoDB Enterprise Kubernetes Operator is software that can be used to deploy Ops Manager and MongoDB resources to a Kubernetes cluster, and it's responsible for managing the lifecycle of each of these deployments. It has been developed based on years of experience and expertise, and it's equipped with the necessary knowledge to properly install, upgrade, monitor, manage, and secure MongoDB objects on Kubernetes. 
The Kubernetes Operator uses the MongoDBOpsManager custom resource to manage Ops Manager objects. It constantly monitors the specification of the custom resource for any changes and, when changes are detected, the operator validates them and makes the necessary updates to the resources in the Kubernetes cluster.

The MongoDBOpsManager custom resource specification defines the following Ops Manager components:

- The Application Database
- The Ops Manager application
- The Backup Daemon

When you use the Kubernetes Operator to create an instance of Ops Manager, the Ops Manager MongoDB Application Database will be deployed as a replica set. It's not possible to configure the Application Database as a standalone database or a sharded cluster.

The Kubernetes Operator automatically sets up Ops Manager to monitor the Application Database that powers the Ops Manager Application. It creates a project named `<ops-manager-resource-name>-db` to allow you to monitor the Application Database deployment. While Ops Manager monitors the Application Database deployment, it does not manage it.

When you deploy Ops Manager, you need to configure it. This typically involves using the configuration wizard. However, you can bypass the configuration wizard if you set certain essential settings in your object specification before deployment. I will demonstrate that in this post.

The Operator automatically enables backup. It deploys a StatefulSet, which consists of a single pod, to host the Backup Daemon Service and creates a Persistent Volume Claim and Persistent Volume for the Backup Daemon's head database. The operator uses the Ops Manager API to enable the Backup Daemon and configure the head database.

## Getting started

Alright, let's get started using the operator and build something!

For this tutorial, we will need the following tools:

- gcloud
- gke-cloud-auth-plugin
- Helm
- kubectl
- kubectx
- git

To get started, we should first create a Kubernetes cluster and then install the MongoDB Kubernetes Operator on the cluster. Part 1 of this series provides instructions on how to do so.

> **Note**
> For the sake of simplicity, we are deploying Ops Manager in the same namespace as our MongoDB Operator. In a production environment, you should deploy Ops Manager in its own namespace.

### Environment pre-checks

Upon successful creation of a cluster and installation of the operator (described in Part 1), it's essential to validate their readiness for use.

```bash
gcloud container clusters list

NAME             LOCATION     MASTER_VERSION    NUM_NODES  STATUS
master-operator  us-south1-a  1.23.14-gke.1800  4          RUNNING
```

Display the full name of our new Kubernetes cluster using `kubectx`.

```bash
kubectx
```

You should see your cluster listed here. Make sure your context is set to the master cluster.

```bash
kubectx $(kubectx | grep "master-operator" | awk '{print $1}')
```

In order to continue this tutorial, make sure that the operator is in the `Running` state.

```bash
kubectl get po -n "${NAMESPACE}"

NAME                                     READY   STATUS    RESTARTS   AGE
mongodb-enterprise-operator-649bbdddf5   1/1     Running   0          7m9s
```

## Using the MongoDBOpsManager CRD

Create a secret containing the username and password on the master Kubernetes cluster for accessing the Ops Manager user interface after installation.
```bash
kubectl -n "${NAMESPACE}" create secret generic om-admin-secret \
  --from-literal=Username="[email protected]" \
  --from-literal=Password="p@ssword123" \
  --from-literal=FirstName="Ops" \
  --from-literal=LastName="Manager"
```

### Deploying Ops Manager

Then, we can deploy Ops Manager on the master Kubernetes cluster with the help of the `opsmanagers` custom resource by creating a `MongoDBOpsManager` object. First, set the Ops Manager and Application Database versions we want to use:

```bash
OM_VERSION=6.0.5
APPDB_VERSION=5.0.5-ent
```
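A `MongoDBOpsManager` manifest references the `om-admin-secret` we just created and the versions exported above. The sketch below is a minimal, illustrative example rather than a definitive spec: the resource name, replica count, and Application Database size are assumptions you should adjust to your environment.

```bash
kubectl apply -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: ${NAMESPACE}
spec:
  # Number of Ops Manager instances (assumed value)
  replicas: 1
  version: ${OM_VERSION}
  # Secret created earlier, holding the Ops Manager admin user credentials
  adminCredentials: om-admin-secret
  applicationDatabase:
    # The Application Database is always deployed as a replica set
    members: 3
    version: ${APPDB_VERSION}
EOF
```

Once the object is created, the operator deploys the Application Database, the Ops Manager application, and the Backup Daemon described earlier.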
md
{ "tags": [ "Connectors", "Kubernetes" ], "pageDescription": "Learn how to deploy the MongoDB Ops Manager in a Kubernetes cluster with the MongoDB Kubernetes Operators.", "contentType": "Tutorial" }
Mastering MongoDB Ops Manager on Kubernetes
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/analyzing-analyzers-build-search-index-app
created
# Analyzing Analyzers to Build the Right Search Index for Your App **“Why am I not getting the right search results?”** So, you’ve created your first search query. You are familiar with various Atlas Search operators. You may have even played around with score modifiers to sort your search results. Yet, typing into that big, beautiful search bar still isn’t bringing you the results you expect from your data. Well, It just might be your search index definition. Or more specifically, your analyzer. You may know Lucene analyzers are important—but why? How do they work? How do you choose the right one? If this is you, don’t worry. In this tutorial, we will analyze analyzers—more specifically, Atlas Search indexes and the Lucene analyzers used to build them. We’ll define what they are exactly and how they work together to bring you the best results for your search queries. Expect to explore the following questions: * What is a search index and how is it different from a traditional MongoDB index? * What is an analyzer? What kinds of analyzers are built into Atlas and how do they compare to affect your search results? * How can you create an Atlas Search index using different search analyzers? We will even offer you a nifty web tool as a resource to demonstrate a variety of different use cases with analyzers and allow you to test your own sample. By the end, cured of your search analysis paralysis, you’ll brim with the confidence and knowledge to choose the right analyzers to create the best Atlas Search index for your application. ## What is an index? So, what’s an index? Generally, indexes are special data structures that enable ultra-fast querying and retrieval of documents based on certain identifiers.  Every Atlas Search query requires a search index. Actually, it’s the very first line of every Atlas Search query. If you don’t see one written explicitly, the query will use the default search index. Whereas a typical MongoDB index is a b-tree index, Atlas Search uses inverted indexes, which are much faster, flexible, and more powerful for text. Let’s explore the differences by walking through an example. Say we have a set of MongoDB documents that look like this: Each document has an “\_id” field as a unique identifier for every MongoDB document and the “s” field of text. MongoDB uses the \_id field to create the collection’s unique default index. Developers may also create other MongoDB indexes specific to their application’s querying needs. If we were to search through these documents’ sentence fields for the text: **“It was the best of times, it was the worst of times.”** -A Tale of Two Cities, Charles Dickens Atlas Search would break down this text data into these seven individual terms for our inverted index : **it - was - the - best - of - times - worst**  Next, Atlas Search would map these terms back to the original MongoDB documents’ \_id fields as seen below. The word “it” can be found in document with \_id 4.  Find “the”  in documents 2, 3, 4, etc. So essentially, an inverted index is a mapping between terms and which documents contain those terms. The inverted index contains the term and the \_id of the document, along with other relevant metadata, such as the position of the term in the document. You can think about the inverted index as analogous to the index you might find in the back of the book. Remember how book indexes contain words or expressions and list the pages in the book where they are found?  
📖📚 Well, these inverted indexes use these terms to point to the specific documents in your database. Imagine if you are looking for Lady MacBeth's utterance of "Out, damned spot" in Shakespeare's MacBeth. You wouldn't start at page one and read through the entire play, would you? I would go straight to the index to pinpoint it in Act 5, Scene 1, and even the exact page.

Inverted indexes make text searches much faster than a traditional search because you are not searching through every single document at query time. You are instead querying the search index which was mapped upon index creation. Then, following the roadmap with the \_id to the exact data document(s) is fast and easy.

## What are analyzers?

How does our metaphorical book decide which words or expressions to list in the back? Or for Atlas Search specifically, how do we know what terms to put in our Search indexes? Well, this is where *analyzers* come into play.

To make our corpus of data searchable, we transform it into terms or "tokens" through a process called "analysis" done by analyzers.

In our Charles Dickens example, we broke apart, "It was the best of times, it was the worst of times," by removing the punctuation, lowercasing the words, and breaking the text apart at the non-letter characters to obtain our terms. These rules are applied by the lucene.standard analyzer, which is Atlas Search's default analyzer.

Atlas Search offers other analyzers built-in, too. A whitespace analyzer will keep your casing and punctuation but will split the text into tokens at only the whitespaces. The English analyzer takes a bit of a heavier hand when tokenizing. It removes common STOP words for English. STOP words are common words like "the," "a," "of," and "and" that you find often but may make the results of your searches less meaningful. In our Dickens example, we remove the "it," "was," and "the." Also, it understands plurals and "stemming" words to their most reduced form.

Applying the English analyzer leaves us with only the following three tokens:

**\- best \- worst \- time**

Which maps as follows:

Notice you can't find "the" or "of" with the English analyzer because those stop words were removed in the analysis process.

## The Analyzer Analyzer

Interesting, huh? 🤔 Want a deeper analyzer analysis? Check out AtlasSearchIndexes.com. Here you'll find a basic tool to compare some of the various analyzers built into Atlas:

| Analyzer | Text Processing Description |
| --- | --- |
| Standard | Lowercase, removes punctuation, keeps accents |
| English | Lowercase, removes punctuation and stop words, stems to root, pluralization, and possessive |
| Simple | Lowercase, removes punctuation, separates at non-letters |
| Whitespace | Keeps case and punctuation, separates at whitespace |
| Keyword | Keeps everything exactly intact |
| French | Similar to English, but in French =-) |

By toggling across all the different types of analyzers listed in the top bar, you will see what I call the basic golden rules of each one. We've discussed standard, whitespace, and English. The simple analyzer removes punctuation and lowercases and separates at non-letters. "Keyword" is the easiest for me to remember because everything needs to match exactly and returns a single token. Case, punctuation, everything. This is really helpful for when you expect a specific set of options—checkboxes in the application UI, for example.
With our golden rules in mind, select one of the sample texts offered and see how they are transformed differently with each analyzer. We have a basic string, an email address, some HTML, and a French sentence. Try searching for particular terms across these text samples by using the input box. Do they produce a match?

Let’s try our first sample text:

**“As I was walking to work, I listened to two of Mike Lynn’s podcasts, and I dropped my keys.”**

Notice by the yellow highlighting how the English analyzer allows you to recognize the stems “walk” and “listen,” the singular “podcast” and “key.” However, none of those terms will match with any other analyzer:

Parlez-vous français? Comment dit-on “stop word” en français?

Email addresses can be a challenge. But now that you understand the rules for analyzers, try looking for “mongodb” email addresses (or Gmail, Yahoo, “fill-in-the-corporate-blank.com”). I can match “mongodb” with the simple analyzer, but not with any of the others.

## Test your token knowledge on your own data

Now that you have acquired some token knowledge of analyzers, test it on your own data on the Tokens page of atlassearchindexes.com. With our Analyzer Analyzer in place to help guide you, you can input your own sample text data in the input bar and hit submit ✅. Once that is done, input your search term and choose an analyzer to see if there is a result returned. Maybe you have some logging strings or UUIDs to try?

Analyzers matter. If you aren’t getting the search results you expect, check the analyzer used in your index definition.

## Create an Atlas Search index

Armed with our deeper understanding of analyzers, we can take the next step in our search journey and create a search index in Atlas using different analyzers. I have a movie search engine application that uses the sample\_mflix.movies collection in Atlas, so let’s go to that collection in my Atlas UI, and then to the Search Indexes tab.

> **Tip! You can download this sample data, as well as other sample datasets on all Atlas clusters, including the free tier.**

We can create the search index using the Visual Editor. When creating the Atlas Search index, we can specify which analyzer to use. By default, Atlas Search uses the lucene.standard analyzer and maps every field dynamically. Mapping dynamically will automatically index all the fields of supported types. This is great if your schema evolves often or if you are experimenting with Atlas Search—but it takes up space. Some index configuration options—like autocomplete, synonyms, multi analyzers, and embedded documents—can lead to search indexes taking up a significant portion of your disk space, even more than the dataset itself. Although this is expected behavior, you may notice the impact on performance, especially with larger collections. If you are only searching across a few fields, I suggest you define your index to map only those fields.

> Pro tip! To improve search query performance and save disk space, refine your index to:
> * Map only the fields your application needs.
> * Set the store option to false when specifying a string type in an index definition.

You can also choose different analyzers for different fields—and you can even apply more than one analyzer to the same field.

Pro tip! You can also use your own custom analyzer—but we’ll save custom analyzers for a different day.

Click **Refine** to customize our index definition. I’ll turn off dynamic mapping and Add Field to map the title field to the standard analyzer.
Then, add the fullplot field to map with the **english analyzer**. CREATE!

And now, after just a few clicks, I have a search index named ‘default’ which stores the tokenized results of the lucene.standard analyzer on the title field and the tokenized results of the lucene.english analyzer on the fullplot field.

It’s just that simple. And just like that, now I can use this index that took a minute to create to search these fields in my movies collection! 🎥🍿

## Takeaways

So, when configuring your search index:

* Think about your data first. Knowing your data, how will you be querying it? What do you want your tokens to be?
* Then, choose your analyzer accordingly.
* Specify the best analyzer for your use case in your Atlas Search index definition.
* Specify that index when writing your search query (see the sketch at the end of this article).

You can create many different search indexes for your use case, but remember that you can only use one search index per search query.

So, now that we have analyzed the analyzers, you know why picking the right analyzer matters. You can create the most efficient Atlas Search index for accurate, optimal results. So go forth, search-warrior! Type in your application’s search box with confidence, not crossed fingers.
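As promised in the takeaways, here is a sketch of what we just built, expressed as a JSON index definition together with a query that uses it. The field mappings follow the walkthrough above; the query text itself is arbitrary.

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": {
        "type": "string",
        "analyzer": "lucene.standard"
      },
      "fullplot": {
        "type": "string",
        "analyzer": "lucene.english"
      }
    }
  }
}
```

A query against that index from mongosh could then look something like this (the index name is ‘default’, as created above):

```javascript
db.movies.aggregate([
  {
    $search: {
      index: "default",                 // the index we just created
      text: {
        query: "haunted by a murder",   // arbitrary example text
        path: "fullplot"                // the field analyzed with lucene.english
      }
    }
  },
  { $limit: 5 },
  { $project: { title: 1, year: 1, _id: 0 } }
])
```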
md
{ "tags": [ "Atlas" ], "pageDescription": "This is an in-depth explanation of various Atlas Search analyzers and indexes to help you build the best full-text search experience for your MongoDB application.", "contentType": "Tutorial" }
Analyzing Analyzers to Build the Right Search Index for Your App
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/rule-based-access-atlas-data-api
created
# Rule-Based Access to Atlas Data API MongoDB Atlas App Services have extensive serverless backend capabilities, such as Atlas Data API, that simply provide an endpoint for the read and write access in a specific cluster that you can customize access to later. You can enable authentication by using one of the authentication providers that are available in Atlas App Services. And, customize the data access on the collections, based on the rules that you can define with the App Services Rules. In this blog post, I’ll walk you through how you can expose the data of a collection through Atlas Data API to three users that are in three different groups with different permissions. ## Scenario * Dataset: We have a simple dataset that includes movies, and we will expose the movie data through Atlas Data API. * We have three users that are in three different groups. * Group 01 has access to all the fields on the **movie** collection in the **sample_mflix** database and all the movies available in the collection. * Group 02 has access only to the fields **title, fullplot, plot,** and **year** on the **movie** collection in the **sample_mflix** database and to all the movies available in the collection. * Group 03 has access only to the fields **title, fullplot, plot,** and **year** on the **movie** collection in the **sample_mflix** database and to the movies where the **year** field is greater than **2000**. Three users given in the scenario above will have the same HTTPS request, but they will receive a different result set based on the rules that are defined in App Services Rules. ## Prerequisites * Provision an Atlas cluster (even the tier M0 should be enough for the feature to be tested). * After you’ve provisioned the cluster, load the sample data set by following the steps. ## Steps to set up Here's how you can get started! ### Step 1: Create an App Services Application After you’ve created a cluster and loaded the sample dataset, you can create an application in App Services. Follow the steps to create a new App Services Application if you haven’t done so already. I used the name “APITestApplication” and chose the cluster “APITestCluster” that I’ve already loaded the sample dataset into. ### Step 2: Enable Atlas Data API After you’ve created the App Services application, navigate to the **HTTPS Endpoints** on the left side menu and click the **Data API** tab, as shown below. Hit the button **Enable the Data API**. After that, you will see that Data API has been enabled. Scroll down on the page and find the **User Settings**. Enable **Create User Upon Authentication**. **Save** it and then **Deploy** it. Now, your API endpoint is ready and accessible. But if you test it, you will get the following authentication error, since no authentication provider has been enabled. ```bash curl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \ > --header 'Content-Type: application/json' \ > --data-raw '{ > "dataSource": "mongodb-atlas", > "database": "sample_mflix", > "collection": "movies", > "limit": 5 > }' {"error":"no authentication methods were specified","error_code":"InvalidParameter","link":"https://realm.mongodb.com/groups/5ca48430014b76f34448bbcf/apps/63a8bb695e56d7c41ab77da6/logs?co_id=63a8be8c0b3a0268511a7525"} ``` ### Step 3.1: Enable JWT-based authentication Navigate to the homepage of the App Services application. 
Click **Authentication** on the left-hand side menu and click the **EDIT** button of the row where the provider is **Custom JWT Authentication**. JWT (JSON Web Token) provides a token-based authentication where a token is generated by the client based on an agreed secret and cryptography algorithm. After the client transmits the token, the server validates the token with the agreed secret and cryptography algorithm and then processes client requests if the token is valid. In the configuration options of the Custom JWT Authentication, fill out the options with the following: * Enable the Authentication Provider (**Provider Enabled** must be turned on). * Keep the verification method as is (**Manually specify signing keys**). * Keep the signing algorithm as is (**HS256**). * Add a new signing key. * Provide the signing key name. * For example, **APITestJWTSigningKEY**. * Provide the secure key content (between 32 and 512 characters) and note it somewhere secure. * For example, **FipTEgYJ6WfUEhCJq3e@pm8-TkE9*UZN**. * Add two fields in the metadata fields. * The path should be **metadata.group** and the corresponding field should be **group**. * The path should be **metadata.name** and the corresponding field should be **name**. * Keep the audience field as is (empty). Below, you can find how the JWT Authentication Provider form has been filled accordingly. **Save** it and then **Deploy** it. After it’s deployed, you can see the secret that has been created in the App Services Values, that can be accessible on the left side menu by clicking **Values**. ### Step 3.2: Test JWT authentication Now, we need an encoded JWT to pass it to App Services Data API to authenticate and consequently access the underlying data. You can have a separate external authentication service that can provide a signed JWT that you can use in App Services Authentication. However, for the sake of simplicity, we’ll generate our own fake JWTs through jwt.io. These are the steps to generate an encoded JWT: * Visit jwt.io. * On the right-hand side in the section **Decoded**, we can fill out the values. On the left-hand side, the corresponding **Encoded** JWT will be generated. * In the **Decoded** section: * Keep the **Header** section same. * In the **Payload** section, set the following fields: * Sub. * Represents owner of the token. * Provide value unique to the user. * Metadata. * Represents metadata information regarding this token and can be used for further processing in App Services. * We have two sub fields here. * Name. * Represents the username of the client that will initiate the API request. * This information will be used as the username in App Services. * Group. * Represents the group information of the client that we’ll use later for rule-based access. * Exp. * Represents when the token is going to expire. * Provide a future time to keep expiration impossible during our tests. * Aud. * Represents the name of the App Services Application that you can get from the homepage of your application in App Services. * In the **Verify Signature** section: * Provide the same secret that you’ve already provided while enabling Custom JWT Authentication in the Step 3.1. Below, you can find how the values have been filled out in the **Decoded** section and the corresponding **Encoded** JWT that has been generated. Copy the generated **JWT** from the **Encoded** section and pass it to the header section of the HTTP request, as shown below. 
```bash curl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAxIiwiZ3JvdXAiOiJncm91cDAxIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.cq5Dr5fJ-BD1mBJia697oWVg_yWPua_NT5roUlxihYE' --header 'Content-Type: application/json' --data-raw '{ "dataSource": "mongodb-atlas", "database": "sample_mflix", "collection": "movies", "limit": 5 }' "Failed to find documents: FunctionError: no rule exists for namespace 'sample_mflix.movies" ``` We get the following error: “**no rule exists for namespace**.” Basically, we were able to authenticate to the application. However, since there were no App Services Rules defined, we were not able to access any data. Even though the request is not successful due to the no rule definition, you can check out the App Users page to list authenticated users as shown below. **user01** was the name of the user that was provided in the **metadata.name** field of the JWT. ### Step 4.1: Create a Role in App Services Rules So far, we have enabled Atlas Data API and Custom JWT Authentication, and we were able to authenticate with the username **user01** who is in the group **group01**. These two metadata information (user and group) were filled in the **metadata** field of the JWT. Remember the payload of the JWT: ```json { "sub": "001", "metadata": { "name": "user01", "group": "group01" }, "exp": 1896239022, "aud" : "apitestapplication-ckecj" } ``` Now, based on the **metadata.group** field value, we will show filtered or unfiltered movie data. Let’s remember the rules that we described in the Scenario: * We have three users that are in three different groups. * Group 01 has access to all the fields on the **movie** collection in the **sample_mflix** database and all the movies available in the collection. * Group 02 has access only to the fields **title**, **fullplot**, **plot**, and **year** on the **movie** collection in the **sample_mflix** database and to all the movies available in the collection. * Group 03 has access only to the fields **title**, **fullplot**, **plot**, and **year** on the **movie** collection in the **sample_mflix** database and to the movies where the **year** field is greater than **2000**. Let’s create a role that will have access to all of the fields. This role will be for the users that are in Group 01. * Navigate the **Rules** section on the left-hand side of the menu in App Services. * Choose the collection **sample_mflix.movies** on the left side of the menu. * Click **Skip** (**Start from Scratch**) on the right side of the menu, as shown below. **Role name**: Give it a proper role name. We will use **fullReadAccess** as the name for this role. **Apply when**: Evaluation criteria of this role. In other words, it represents when this role is evaluated. Provide the condition accordingly. **%%user.data.group** matches the **metadata.group** information that is represented in JWT. We’ve configured this mapping in Step 3.1. **Document Permissions**: Allowed activities for this role. **Field Permissions**: Allowed fields to be read/write for this role. You can see below how it was filled out accordingly. 
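In plain JSON terms, the role we just described boils down to something like the following (a sketch only; the exact shape of an App Services rule configuration may differ slightly):

```json
{
  "roles": [
    {
      "name": "fullReadAccess",
      "apply_when": { "%%user.data.group": "group01" },
      "read": true,
      "write": false,
      "insert": false,
      "delete": false,
      "fields": {}
    }
  ]
}
```

Here, `%%user.data.group` expands to the `group` value that we mapped from the JWT metadata fields in Step 3.1.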
After you’ve saved and deployed it, we can test the curl command again, as shown below: ``` curl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \ > --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAxIiwiZ3JvdXAiOiJncm91cDAxIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.cq5Dr5fJ-BD1mBJia697oWVg_yWPua_NT5roUlxihYE' \ > --header 'Content-Type: application/json' \ > --data-raw '{ > "dataSource": "mongodb-atlas", > "database": "sample_mflix", > "collection": "movies", > "limit": 5 > }' | python -m json.tool % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 6192 0 6087 100 105 8072 139 --:--:-- --:--:-- --:--:-- 8245 { "documents": { "_id": "573a1390f29313caabcd4135", "plot": "Three men hammer on an anvil and pass a bottle of beer around.", "genres": [ "Short" ], "runtime": 1, "cast": [ "Charles Kayser", "John Ott" ], "num_mflix_comments": 0, ``` Now the execution of the HTTPS request is successful. It returns five records with all the available fields in the documents. ### Step 4.2: Create another role in App Services Rules Now we’ll add another role that only has access to four fields (**title**, **fullplot**, **plot**, and **year**) on the collection **sample_mflix.movies**. It is similar to what we’ve created in [Step 4.1, but now we’ve defined which fields are accessible to this role, as shown below. **Save** it and **Deploy** it. Create another JWT for the user **user02** that is in **group02**, as shown below. Pass the generated Encoded JWT to the curl command: ``` curl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \ > --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDIiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAyIiwiZ3JvdXAiOiJncm91cDAyIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.llfSR9rLSoSTb3LGwENcgYvKeIu3XZugYbHIbqI29nk' \ > --header 'Content-Type: application/json' \ > --data-raw '{ > "dataSource": "mongodb-atlas", > "database": "sample_mflix", > "collection": "movies", > "limit": 5 > }' | python -m json.tool % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3022 0 2917 100 105 3363 121 --:--:-- --:--:-- --:--:-- 3501 { "documents": { "_id": "573a1390f29313caabcd4135", "plot": "Three men hammer on an anvil and pass a bottle of beer around.", "title": "Blacksmith Scene", "fullplot": "A stationary camera looks at a large anvil with a blacksmith behind it and one on either side. The smith in the middle draws a heated metal rod from the fire, places it on the anvil, and all three begin a rhythmic hammering. After several blows, the metal goes back in the fire. One smith pulls out a bottle of beer, and they each take a swig. Then, out comes the glowing metal and the hammering resumes.", "year": 1893 }, { "_id": "573a1390f29313caabcd42e8", "plot": "A group of bandits stage a brazen train hold-up, only to find a determined posse hot on their heels.", "title": "The Great Train Robbery", "fullplot": "Among the earliest existing films in American cinema - notable as the first film that presented a narrative story to tell - it depicts a group of cowboy outlaws who hold up a train and rob the passengers. They are then pursued by a Sheriff's posse. 
Several scenes have color included - all hand tinted.", "year": 1903 }, … ``` Now the user in **group02** has access to only the four fields (**title**, **plot**, **fullplot**, and **year**), in addition to the **_id** field, as we configured in the role definition of a rule in App Services Rules. ### Step 4.3: Updating a role and a creating a filter in App Services Rules Now we’ll update the existing role that we’ve created in [Step 4.2 by including **group03** to be evaluated, and we will add a filter that restricts access to only the movies where the **year** field is greater than 2000. Update the role (include **group03** in addition to **group02**) that you created in Step 4.2 as shown below. Now, users that are in **group03** can authenticate and project only the four fields rather than all the available fields. But how can we put a restriction on the filtering based on the value of the **year** field? We need to add a filter. Navigate to the **Filters** tab in the **Rules** page of the App Services after you choose the **sample_mflix.movies** collection. Provide the following inputs for the **Filter**: After you’ve saved and deployed it, create a new JWT for the user **user03** that is in the group **group03**, as shown below: Copy the encoded JWT and pass it to the curl command, as shown below: ``` curl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \ > --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDMiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAzIiwiZ3JvdXAiOiJncm91cDAzIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0._H5rScXP9xymF7mCDj6m9So1-3qylArHTH_dxqlndwU' \ > --header 'Content-Type: application/json' \ > --data-raw '{ > "dataSource": "mongodb-atlas", > "database": "sample_mflix", > "collection": "movies", > "limit": 5 > }' | python -m json.tool % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4008 0 3903 100 105 6282 169 --:--:-- --:--:-- --:--:-- 6485 { "documents": { "_id": "573a1393f29313caabcdcb42", "plot": "Kate and her actor brother live in N.Y. in the 21st Century. Her ex-boyfriend, Stuart, lives above her apartment. Stuart finds a space near the Brooklyn Bridge where there is a gap in time....", "title": "Kate & Leopold", "fullplot": "Kate and her actor brother live in N.Y. in the 21st Century. Her ex-boyfriend, Stuart, lives above her apartment. Stuart finds a space near the Brooklyn Bridge where there is a gap in time. He goes back to the 19th Century and takes pictures of the place. Leopold -- a man living in the 1870s -- is puzzled by Stuart's tiny camera, follows him back through the gap, and they both ended up in the present day. Leopold is clueless about his new surroundings. He gets help and insight from Charlie who thinks that Leopold is an actor who is always in character. 
Leopold is a highly intelligent man and tries his best to learn and even improve the modern conveniences that he encounters.", "year": 2001 }, { "_id": "573a1398f29313caabceb1fe", "plot": "A modern day adaptation of Dostoyevsky's classic novel about a young student who is forever haunted by the murder he has committed.", "title": "Crime and Punishment", "fullplot": "A modern day adaptation of Dostoyevsky's classic novel about a young student who is forever haunted by the murder he has committed.", "year": 2002 }, …
```

Now, **group03** members will receive the movies where the **year** information is greater than 2000, along with only the four fields (**title**, **plot**, **fullplot**, and **year**), in addition to the **_id** field.

## Summary

MongoDB Atlas App Services provides extensive functionalities to build your back end in a serverless manner. In this blog post, we’ve discussed:

* How we can enable Custom JWT Authentication.
* How we can map custom content of a JWT to the data that can be consumed in App Services — for example, managing usernames and the groups of users.
* How we can restrict data access for the users who have different permissions.
* We’ve created the following in App Services Rules:
  * Two roles to specify read access on all the fields and only the four fields.
  * One filter to exclude the movies where the year field is not greater than 2000.

Give it a free try! Provision an M0 Atlas instance and create a new App Services Application. If you are stuck, let us help you in the developer forums.
md
{ "tags": [ "Atlas" ], "pageDescription": "Atlas Data API provides a serverless API layer on top of your data in Atlas. You can natively configure rule-based access for a set of users with different permissions.", "contentType": "Tutorial" }
Rule-Based Access to Atlas Data API
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/elt-mongodb-data-airbyte
created
# ELT MongoDB Data Using Airbyte Airbyte is an open source data integration platform that provides an easy and quick way to ELT (Extract, Load, and Transform) your data between a plethora of data sources. AirByte can be used as part of a workflow orchestration solution like Apache Airflow to address data movement. In this post, we will install Airbyte and replicate the sample database, “sample\_restaurants,” found in MongoDB Atlas out to a CSV file. ## Getting started Airbyte is available as a cloud service or can be installed self-hosted using Docker containers. In this post, we will deploy Airbyte locally using Docker. ``` git clone https://github.com/airbytehq/airbyte.git cd airbyte docker-compose up ``` When the containers are ready, you will see the logo printed in the compose logs as follows: Navigate to http://localhost:8000 to launch the Airbyte portal. Note that the default username is “admin” and the password is “password.” ## Creating a connection To create a source connector, click on the Sources menu item on the left side of the portal and then the “Connect to your first source” button. This will launch the New Source page as follows: Type “mongodb” and select “MongoDb.” The MongoDB Connector can be used with both self-hosted and MongoDB Atlas clusters. Select the appropriate MongoDB instance type and fill out the rest of the configuration information. In this post, we will be using MongoDB Atlas and have set our configuration as follows: | | | | --- | --- | | MongoDB Instance Type | MongoDB Atlas | | Cluster URL | demo.ikyil.mongodb.net | | Database Name | sample_restaurants | | Username | ab_user | | Password | ********** | | Authentication Source | admin | Note: If you’re using MongoDB Atlas, be sure to create the user and allow network access. By default, MongoDB Atlas does not access remote connections. Click “Setup source” and Airbyte will test the connection. If it’s successful, you’ll be sent to the Add destination page. Click the “Add destination” button and select “Local CSV” from the drop-down. Next, provide a destination name, “restaurant-samples,” and destination path, “/local.” The Airbyte portal provides a setup guide for the Local CSV connector on the right side of the page. This is useful for a quick reference on connector configuration. Click “Set up destination” and Airbyte will test the connection with the destination. Upon success, you’ll be redirected to a page where you can define the details of the stream you’d like to sync. Airbyte provides a variety of sync options, including full refresh and incremental. Select “Full Refresh | Overwrite” and then click “Set up sync.” Airbyte will kick off the sync process and if successful, you’ll see the Sync Succeeded message. ## Exploring the data Let’s take a look at the CSV files created. The CSV connector writes to the /local docker mount on the airbyte server. By default, this mount is defined as /tmp/airbyte_local and can be changed by defining the LOCAL_ROOT docker environment variable. To view the CSV files, launch bash from the docker exec command as follows: **docker exec -it airbyte-server bash** Once connected, navigate to the /local folder and view the CSV files: bash-4.2# **cd /tmp/airbyte_local/** bash-4.2# **ls** _airbyte_raw_neighborhoods.csv _airbyte_raw_restaurants.csv ## Summary In today’s data-rich world, building data pipelines to collect and transform heterogeneous data is an essential part of many business processes. 
Whether the goal is deriving business insights through analytics or creating a single view of the customer, Airbyte makes it easy to move data between MongoDB and many other data sources.
md
{ "tags": [ "Atlas" ], "pageDescription": "Learn how to extract load and transform MongoDB data using Airbyte.", "contentType": "Tutorial" }
ELT MongoDB Data Using Airbyte
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/build-first-dotnet-core-application-mongodb-atlas
created
# Build Your First .NET Core Application with MongoDB Atlas So you're a .NET Core developer or you're trying to become one and you'd like to get a database included into the mix. MongoDB is a great choice and is quite easy to get started with for your .NET Core projects. In this tutorial, we're going to explore simple CRUD operations in a .NET Core application, something that will make you feel comfortable in no time! ## The Requirements To be successful with this tutorial, you'll need to have a few things ready to go. - .NET Core installed and configured. - MongoDB Atlas cluster, M0 or better, deployed and configured. Both are out of the scope of this particular tutorial, but you can refer to this tutorial for more specific instructions around MongoDB Atlas deployments. You can validate that .NET Core is ready to go by executing the following command: ```bash dotnet new console --output MongoExample ``` We're going to be building a console application, but we'll explore API development in a later tutorial. The "MongoExample" project is what we'll use for the remainder of this tutorial. ## Installing and Configuring the MongoDB Driver for .NET Core Development When building C# applications, the common package manager to use is NuGet, something that is readily available in Visual Studio. If you're using Visual Studio, you can add the following: ```bash Install-Package MongoDB.Driver -Version 2.14.1 ``` However, I'm on a Mac, use a variety of programming languages, and have chosen Visual Studio Code to be the IDE for me. There is no official NuGet extension for Visual Studio Code, but that doesn't mean we're stuck. Execute the following from a CLI while within your project directory: ```bash dotnet add package MongoDB.Driver ``` The above command will add an entry to your project's "MongoExample.csproj" file and download the dependencies that we need. This is valuable whether you're using Visual Studio Code or not. If you generated the .NET Core project with the CLI like I did, you'll have a "Program.cs" file to work with. Open it and add the following code: ```csharp using MongoDB.Driver; using MongoDB.Bson; MongoClient client = new MongoClient("ATLAS_URI_HERE"); List databases = client.ListDatabaseNames().ToList(); foreach(string database in databases) { Console.WriteLine(database); } ``` The above code will connect to a MongoDB Atlas cluster and then print out the names of the databases that the particular user has access to. The printing of databases is optional, but it could be a good way to make sure everything is working correctly. If you're wondering where to get your `ATLAS_URI_HERE` string, you can find it in your MongoDB Atlas dashboard and by clicking the connect button on your cluster. The above image should help when looking for the Atlas URI. ## Building a POCO Class for the MongoDB Document Model When using .NET Core to work with MongoDB documents, you can make use of the `BsonDocument` class, but depending on what you're trying to do, it could complicate your .NET Core application. Instead, I like to work with classes that are directly mapped to document fields. This allows me to use the class naturally in C#, but know that everything will work out on its own for MongoDB documents. 
Create a "playlist.cs" file within your project and include the following C# code:

```csharp
using MongoDB.Bson;

public class Playlist
{
    public ObjectId _id { get; set; }

    public string username { get; set; } = null!;

    public List<string> items { get; set; } = null!;

    public Playlist(string username, List<string> movieIds)
    {
        this.username = username;
        this.items = movieIds;
    }
}
```

In the above `Playlist` class, we have three fields. If you want each of those fields to map perfectly to a field in a MongoDB document, you don't have to do anything further. To be clear, the above class would map to a document that looks like the following:

```json
{
    "_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
    "username": "nraboy",
    "items": [ "1234", "5678" ]
}
```

However, if you wanted your C# class field to be different than the field it should map to in a MongoDB document, you'd have to make a slight change. The `Playlist` class would look something like this:

```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

public class Playlist
{
    public ObjectId _id { get; set; }

    [BsonElement("username")]
    public string user { get; set; } = null!;

    public List<string> items { get; set; } = null!;

    public Playlist(string username, List<string> movieIds)
    {
        this.user = username;
        this.items = movieIds;
    }
}
```

Notice the new import and the use of `BsonElement` to map a remote document field to a local .NET Core class field. There are a lot of other things you can do in terms of document mapping, but they are out of the scope of this particular tutorial. If you're curious about other mapping techniques, check out the documentation on the subject.

## Implementing Basic CRUD in .NET Core with MongoDB

Since we're able to connect to Atlas from our .NET Core application and we have some understanding of what our data model will look like for the rest of the example, we can now work towards creating, reading, updating, and deleting (CRUD) documents.

We'll start by creating some data. Within the project's "Program.cs" file, make it look like the following:

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient("ATLAS_URI_HERE");

var playlistCollection = client.GetDatabase("sample_mflix").GetCollection<Playlist>("playlist");

List<string> movieList = new List<string>();
movieList.Add("1234");

playlistCollection.InsertOne(new Playlist("nraboy", movieList));
```

In the above example, we're connecting to MongoDB Atlas, getting a reference to our "playlist" collection while noting that it is related to our `Playlist` class, and then making use of the `InsertOne` function on the collection.

If you ran the above code, you should see a new document in your collection with matching information.

So let's read from that collection using our C# code:

```csharp
// Previous code here ...

FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("username", "nraboy");

List<Playlist> results = playlistCollection.Find(filter).ToList();

foreach(Playlist result in results) {
    Console.WriteLine(string.Join(", ", result.items));
}
```

In the above code, we are creating a new `FilterDefinition` filter to determine which data we want returned from our `Find` operation. In particular, our filter will give us all documents that have "nraboy" as the `username` field, which may be more than one because we never specified if the field should be unique.

Using the filter, we can do a `Find` on the collection and convert it to a `List` of our `Playlist` class. If you don't want to use a `List`, you can work with your data using a cursor. You can learn more about cursors in the documentation.
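For illustration, that cursor-based approach could look roughly like this (a sketch that reuses the `filter` from above):

```csharp
// Previous code here ...

// Iterate the matching documents with a cursor instead of loading them
// all into memory at once.
using (IAsyncCursor<Playlist> cursor = await playlistCollection.FindAsync(filter))
{
    while (await cursor.MoveNextAsync())
    {
        foreach (Playlist result in cursor.Current)
        {
            Console.WriteLine(string.Join(", ", result.items));
        }
    }
}
```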
With a `Find` out of the way, let's move on to updating our documents within MongoDB. We're going to add the following code to our "Program.cs" file:

```csharp
// Previous code here ...

FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("username", "nraboy");

// Previous code here ...

UpdateDefinition<Playlist> update = Builders<Playlist>.Update.AddToSet("items", "5678");

playlistCollection.UpdateOne(filter, update);

results = playlistCollection.Find(filter).ToList();

foreach(Playlist result in results) {
    Console.WriteLine(string.Join(", ", result.items));
}
```

In the above code, we are creating two definitions, one being the `FilterDefinition` that we had created in the previous step. We're going to keep the same filter, but we're adding a definition of what should be updated when there was a match based on the filter.

To clear things up, we're going to match on all documents where "nraboy" is the `username` field. When matched, we want to add "5678" to the `items` array within our document. Using both definitions, we can use the `UpdateOne` method to make it happen.

There are more update operations than just the `AddToSet` function. It is worth checking out the documentation to see what you can accomplish.

This brings us to our final basic CRUD operation. We're going to delete the document that we've been working with. Within the "Program.cs" file, add the following C# code:

```csharp
// Previous code here ...

FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("username", "nraboy");

// Previous code here ...

playlistCollection.DeleteOne(filter);
```

We're going to make use of the same filter we've been using, but this time in the `DeleteOne` function. While we could have more than one document returned from our filter, the `DeleteOne` function will only delete the first one. You can make use of the `DeleteMany` function if you want to delete all of them.

Need to see it all together? Check this out:

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient("ATLAS_URI_HERE");

var playlistCollection = client.GetDatabase("sample_mflix").GetCollection<Playlist>("playlist");

List<string> movieList = new List<string>();
movieList.Add("1234");

playlistCollection.InsertOne(new Playlist("nraboy", movieList));

FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("username", "nraboy");

List<Playlist> results = playlistCollection.Find(filter).ToList();

foreach(Playlist result in results) {
    Console.WriteLine(string.Join(", ", result.items));
}

UpdateDefinition<Playlist> update = Builders<Playlist>.Update.AddToSet("items", "5678");

playlistCollection.UpdateOne(filter, update);

results = playlistCollection.Find(filter).ToList();

foreach(Playlist result in results) {
    Console.WriteLine(string.Join(", ", result.items));
}

playlistCollection.DeleteOne(filter);
```

The above code is everything that we did. If you swapped out the Atlas URI string with your own, it would create a document, read from it, update it, and then finally delete it.

## Conclusion

You just saw how to quickly get up and running with MongoDB in your .NET Core application! While we only brushed upon the surface of what is possible in terms of MongoDB, it should put you on a better path for accomplishing your project needs.

If you're looking for more help, check out the MongoDB Community Forums and get involved.
md
{ "tags": [ "C#", ".NET" ], "pageDescription": "Learn how to quickly and easily start building .NET Core applications that interact with MongoDB Atlas for create, read, update, and delete (CRUD) operations.", "contentType": "Quickstart" }
Build Your First .NET Core Application with MongoDB Atlas
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/javascript/real-time-tracking-change-streams-socketio
created
# Real-Time Location Tracking with Change Streams and Socket.io In this article, you will learn how to use MongoDB Change Streams and Socket.io to build a real-time location tracking application. To demonstrate this, we will build a local package delivery service. Change streams are used to detect document updates, such as location and shipment status, and Socket.io is used to broadcast these updates to the connected clients. An Express.js server will run in the background to create and maintain the websockets. This article will highlight the important pieces of this demo project, but you can find the full code, along with setup instructions, on Github. ## Connect Express to MongoDB Atlas Connecting Express.js to MongoDB requires the use of the MongoDB driver, which can be installed as an npm package. For this project I have used MongoDB Atlas and utilized the free tier to create a cluster. You can create your own free cluster and generate the connection string from the Atlas dashboard. I have implemented a singleton pattern for connecting with MongoDB to maintain a single connection across the application. The code defines a singleton `db` variable that stores the MongoClient instance after the first successful connection to the MongoDB database.The `dbConnect()` is an asynchronous function that returns the MongoClient instance. It first checks if the db variable has already been initialized and returns it if it has. Otherwise, it will create a new MongoClient instance and return it. `dbConnect` function is exported as the default export, allowing other modules to use it. ```typescript // dbClient.ts import { MongoClient } from 'mongodb'; const uri = process.env.MONGODB_CONNECTION_STRING; let db: MongoClient; const dbConnect = async (): Promise => {     try {         if (db) {             return db;         }         console.log('Connecting to MongoDB...');         const client = new MongoClient(uri);         await client.connect();         db = client;         console.log('Connected to db');         return db;     } catch (error) {         console.error('Error connecting to MongoDB', error);         throw error;     } }; export default dbConnect; ``` Now we can call the dbConnect function in the `server.ts` file or any other file that serves as the entry point for your application. ```typescript // server.ts import dbClient from './dbClient'; server.listen(5000, async () => {     try {         await dbClient();     } catch (error) {         console.error(error);     } }); ``` We now have our Express server connected to MongoDB. With the basic setup in place, we can proceed to incorporating change streams and socket.io into our application. ## Change Streams MongoDB Change Streams is a powerful feature that allows you to listen for changes in your MongoDB collections in real-time. Change streams provide a change notification-like mechanism that allows you to be notified of any changes to your data as they happen. To use change streams, you need to use the `watch()` function from the MongoDB driver. Here is a simple example of how you would use change streams on a collection. ```typescript const changeStream = collection.watch() changeStream.on('change', (event) => { // your logic }) ``` The callback function will run every time a document gets added, deleted, or updated in the watched collection. ## Socket.IO and Socket.IO rooms Socket.IO is a popular JavaScript library. 
It enables real-time communication between the server and client, making it ideal for applications that require live updates and data streaming. In our application, it is used to broadcast location and shipment status updates to the connected clients in real-time. One of the key features of Socket.IO is the ability to create "rooms." Rooms are a way to segment connections and allow you to broadcast messages to specific groups of clients. In our application, rooms are used to ensure that location and shipment status updates are only broadcasted to the clients that are tracking that specific package or driver. The code to include Socket.IO and its handlers can be found inside the files `src/server.ts` and `src/socketHandler.ts` We are defining all the Socket.IO events inside the `socketHandler.ts` file so the socket-related code is separated from the rest of the application. Below is an example to implement the basic connect and disconnect Socket.IO events in Node.js. ```typescript // socketHandler.ts const socketHandler = (io: Server) => {   io.on('connection', (socket: any) => {     console.log('A user connected');     socket.on('disconnect', () => {       console.log('A user disconnected');     });   }); }; export default socketHandler; ``` We can now integrate the socketHandler function into our `server.ts file` (the starting point of our application) by importing it and calling it once the server begins listening. ```typescript // server.ts import app from './app'; // Express app import http from 'http'; import { Server } from 'socket.io'; const server = http.createServer(app); const io = new Server(server); server.listen(5000, async () => {   try {     socketHandler(io);   } catch (error) {     console.error(error);   } }); ``` We now have the Socket.IO setup with our Express app. In the next section, we will see how location data gets stored and updated. ## Storing location data MongoDB has built-in support for storing location data as GeoJSON, which allows for efficient querying and indexing of spatial data. In our application, the driver's location is stored in MongoDB as a GeoJSON point. To simulate the driver movement, in the front end, there's an option to log in as driver and move the driver marker across the map, simulating the driver's location. (More on that covered in the front end section.) When the driver moves, a socket event is triggered which sends the updated location to the server, which is then updated in the database. ```typescript socket.on("UPDATE_DA_LOCATION", async (data) => {   const { email, location } = data;   await collection.findOneAndUpdate({ email }, { $set: { currentLocation: location } }); }); ``` The code above handles the "UPDATE_DA_LOCATION" socket event. It takes in the email and location data from the socket message and updates the corresponding driver's current location in the MongoDB database. So far, we've covered how to set up an Express server and connect it to MongoDB. We also saw how to set up Socket.IO and listen for updates. In the next section, we will cover how to use change streams and emit a socket event from server to front end. ## Using change streams to read updates This is the center point of discussion in this article. When a new delivery is requested from the UI, a shipment entry is created in DB. The shipment will be in pending state until a driver accepts the shipment. 
Once the driver accepts the shipment, a socket room is created with the driver id as the room name, and the user who created the shipment is subscribed to that room. Here's a simple diagram to help you better visualize the flow.

With the user subscribed to the socket room, all we need to do is to listen to the changes in the driver's location. This is where the change stream comes into the picture. We have a change stream in place, which is listening to the Delivery Associate (Driver) collection. Whenever there is an update in the collection, this will be triggered. We will use this callback function to execute our business logic.

Note we are passing an option to the change stream watch function, `{ fullDocument: 'updateLookup' }`. It specifies that the complete updated document should be included in the change event, rather than just the delta or the changes made to the document.

```typescript
const watcher = async (io: Server) => {
  const collection = await DeliveryAssociateCollection();
  const changeStream = collection.watch([], { fullDocument: 'updateLookup' });
  changeStream.on('change', (event) => {
    if (event.operationType === 'update') {
      const fullDocument = event.fullDocument;
      io.to(String(fullDocument._id)).emit("DA_LOCATION_CHANGED", fullDocument);
    }
  });
};
```

In the above code, we are listening to all CRUD operations in the Delivery Associate (Driver) collection and we emit socket events only for update operations. Since the room names are just driver ids, we can get the driver id from the updated document. This way, we are able to listen to changes in driver location using change streams and send it to the user.

In the codebase, all the change stream code for the application will be inside the folder `src/watchers/`. You can specify the watchers wherever you desire but to keep code clean, I'm following this approach. The below code shows how the watcher function is executed in the entry point of the application --- i.e., the server.ts file.

```typescript
// server.ts
import deliveryAssociateWatchers from './watchers/deliveryAssociates';

server.listen(5000, async () => {
  try {
    await dbClient();
    socketHandler(io);
    await deliveryAssociateWatchers(io);
  } catch (error) {
    console.error(error);
  }
});
```

In this section, we saw how change streams are used to monitor updates in the Delivery Associate (Driver) collection. We also saw how the `fullDocument` option in the watcher function was used to retrieve the complete updated document, which then allowed us to send the updated location data to the subscribed user through sockets. The next section focuses on exploring the front-end codebase and how the emitted data is used to update the map in real time.

## Front end

I won't go into much detail on the front end but just to give you an overview, it's built on React and uses Leaflet.js for the map. I have included the entire front end as a sub app in the GitHub repo under the folder `/frontend`. The Readme contains the steps on how to install and start the app.

Starting the front end gives two options:

1. Log in as a user.
2. Log in as a driver.

Use the "log in as driver" option to simulate the driver's location.

### Driver simulator

Logging in as driver will let you simulate the driver's location. The code snippet provided demonstrates the use of `useState` and `useEffect` hooks to simulate a driver's location updates. Two of the elements in the snippet are Leaflet components.
One is the actual map we see on the UI and other is, as the name suggests, a marker which is movable using our mouse. ```jsx // Driver Simulator const position, setPosition] = useState(initProps.position); const gpsUpdate = (position) => {   const data = {     email,     location: { type: 'Point', coordinates: [position.lng, position.lat] },   };   socket.emit("UPDATE_DA_LOCATION", data); }; useEffect(() => { gpsUpdate(position); }, [position]); return ( ) ``` The **position** state is initialized with the initial props. When the draggable marker is moved, the position gets updated. This triggers the gpsUpdate function inside its useEffect hook, which sends a socket event to update the driver's location. ### User app On the user app side, when a new shipment is created and a delivery associate is assigned, the `SHIPMENT_UPDATED` socket event is triggered. In response, the user app emits the `SUBSCRIBE_TO_DA` event to subscribe to the driver's socket room. (DA is short for Delivery Associate.) ```js socket.on('SHIPMENT_UPDATED', (data) => {   if (data.deliveryAssociateId) {     const deliveryAssociateId = data.deliveryAssociateId;     socket.emit('SUBSCRIBE_TO_DA', { deliveryAssociateId });   } }); ``` Once subscribed, any changes to the driver's location will trigger the DA_LOCATION_CHANGED socket event. The `driverPosition` state represents the delivery driver's current position. This gets updated every time new data is received from the socket event. ```jsx const [driverPosition, setDriverPosition] = useState(initProps.position); socket.on('DA_LOCATION_CHANGED', (data) => {   const location = data.location;   setDriverPosition(location); }); return ( ) ``` The code demonstrates how the user app updates the driver's marker position on the map in real time using socket events. The state driverPosition is passed to the component and updated with the latest location data from the DA_LOCATION_CHANGED socket event. ## Summary In this article, we saw how MongoDB Change Streams and Socket.IO can be used in a Node.js Express application to develop a real-time system. We learned about how to monitor a MongoDB collection using the change stream watcher method. We also learned how Socket.IO rooms can be used to segment socket connections for broadcasting updates. We also saw a little front-end code on how props are manipulated with socket events. If you wish to learn more about Change Streams, check out our tutorial on [Change Streams and triggers with Node.js, or the video version of it. For a more in-depth tutorial on how to use Change Streams directly in your React application, you can also check out this tutorial on real-time data in a React JavaScript front end.
md
{ "tags": [ "JavaScript", "MongoDB", "Node.js" ], "pageDescription": "In this article, you will learn how to use MongoDB Change Streams and Socket.io to build a real-time location tracking application. To demonstrate this, we will build a local package delivery service.\n", "contentType": "Tutorial" }
Real-Time Location Tracking with Change Streams and Socket.io
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/transactions-csharp-dotnet
created
# Working with MongoDB Transactions with C# and the .NET Framework >Update 10/2019: This article's code example has been updated to include the required handing of the session handle to database methods. C# applications connected to a MongoDB database use the MongoDB .NET driver. To add the .NET driver to your Visual Studio Application, in the NuGet Package Manager, search for "MongoDB". Make sure you choose the latest version (>=2.7) of the driver, and press *Install*. Prior to MongoDB version 4.0, MongoDB was transactionally consistent at the document level. These existing atomic single-document operations provide the transaction semantics to meet the data integrity needs of the majority of applications. This is because the flexibility of the document model allows developers to easily embed related data for an entity as arrays and sub-documents within a single, rich document. That said, there are some cases where splitting the content into two or more collections would be appropriate, and for these cases, multi-document ACID transactions makes it easier than ever for developers to address the full spectrum of use cases with MongoDB. For a deeper discussion on MongoDB document model design, including how to represent one-to-many and many-to-many relationships, check out this article on data model design. In the following code we will create a Product object and perform a MongoDB transaction that will insert some sample data into MongoDB then update the prices for all products by 10%. ``` csp using MongoDB.Bson; using MongoDB.Bson.Serialization.Attributes; using MongoDB.Driver; using System; using System.Threading.Tasks; namespace MongoDBTransaction { public static class Program { public class Product { BsonId] public ObjectId Id { get; set; } [BsonElement("SKU")] public int SKU { get; set; } [BsonElement("Description")] public string Description { get; set; } [BsonElement("Price")] public Double Price { get; set; } } // replace with your connection string if it is different const string MongoDBConnectionString = "mongodb://localhost"; public static async Task Main(string[] args) { if (!await UpdateProductsAsync()) { Environment.Exit(1); } Console.WriteLine("Finished updating the product collection"); Console.ReadKey(); } private static async Task UpdateProductsAsync() { // Create client connection to our MongoDB database var client = new MongoClient(MongoDBConnectionString); // Create the collection object that represents the "products" collection var database = client.GetDatabase("MongoDBStore"); var products = database.GetCollection("products"); // Clean up the collection if there is data in there await database.DropCollectionAsync("products"); // collections can't be created inside a transaction so create it first await database.CreateCollectionAsync("products"); // Create a session object that is used when leveraging transactions using (var session = await client.StartSessionAsync()) { // Begin transaction session.StartTransaction(); try { // Create some sample data var tv = new Product { Description = "Television", SKU = 4001, Price = 2000 }; var book = new Product { Description = "A funny book", SKU = 43221, Price = 19.99 }; var dogBowl = new Product { Description = "Bowl for Fido", SKU = 123, Price = 40.00 }; // Insert the sample data await products.InsertOneAsync(session, tv); await products.InsertOneAsync(session, book); await products.InsertOneAsync(session, dogBowl); var resultsBeforeUpdates = await products .Find(session, Builders.Filter.Empty) .ToListAsync(); 
Console.WriteLine("Original Prices:\n"); foreach (Product d in resultsBeforeUpdates) { Console.WriteLine( String.Format("Product Name: {0}\tPrice: {1:0.00}", d.Description, d.Price) ); } // Increase all the prices by 10% for all products var update = new UpdateDefinitionBuilder() .Mul(r => r.Price, 1.1); await products.UpdateManyAsync(session, Builders.Filter.Empty, update); //,options); // Made it here without error? Let's commit the transaction await session.CommitTransactionAsync(); } catch (Exception e) { Console.WriteLine("Error writing to MongoDB: " + e.Message); await session.AbortTransactionAsync(); return false; } // Let's print the new results to the console Console.WriteLine("\n\nNew Prices (10% increase):\n"); var resultsAfterCommit = await products .Find(session, Builders.Filter.Empty) .ToListAsync(); foreach (Product d in resultsAfterCommit) { Console.WriteLine( String.Format("Product Name: {0}\tPrice: {1:0.00}", d.Description, d.Price) ); } return true; } } } } ``` Source Code available on [Gist. Successful execution yields the following: ## Key points: - You don't have to match class properties to JSON objects - just define a class object and insert it directly into the database. There is no need for an Object Relational Mapper (ORM) layer. - MongoDB transactions use snapshot isolation meaning only the client involved in the transactional session sees any changes until such time as the transaction is committed. - The MongoDB .NET Driver makes it easy to leverage transactions and leverage LINQ based syntax for queries. Additional information about using C# and the .NET driver can be found in the C# and .NET MongoDB Driver documentation.
md
{ "tags": [ "C#", "MongoDB", ".NET" ], "pageDescription": "Walk through an example of how to use transactions in C#.", "contentType": "Tutorial" }
Working with MongoDB Transactions with C# and the .NET Framework
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/connectors/mongodb-connectors-translators-interview
created
# MongoDB Podcast Interview with Connectors and Translators Team The BI Connector and mongomirror are just two examples of powerful but less popular MongoDB products. These products are maintained by a team in MongoDB known as the Connectors and Translators Engineering team. In this podcast episode transcript, we chat with Tim Fogarty, Varsha Subrahmanyam, and Evgeni Dobranov. The team gives us a better understanding of these tools, focusing specifically on the BI Connector and mongomirror. This episode of the MongoDB Podcast is available on YouTube if you prefer to listen. :youtube]{vid=SFezkmAbwos} Michael Lynn (01:58): All right, welcome back. Today, we're talking about connectors and translators and you might be thinking, "Wait a minute. What is a connector and what is a translator?" We're going to get to that. But first, I want to introduce the folks that are joining us on the podcast today. Varsha, would you introduce yourself? Varsha Subrahmanyam (02:19): Yes. Hi, my name is Varsha Subrahmanyam. I'm a software engineer on the translators and connectors team. I graduated from the University of Illinois at Urbana-Champagne in 2019 and was an intern at MongoDB just before graduation. And I returned as a full-timer the following summer. So I've been here for one and a half years. \[inaudible 00:02:43\] Michael Lynn (02:43): Evgeni? Evgeni Dobranov (02:44): Yeah. Hello. My name is Evgeni Dobranov. I'm more or less right alongside Varsha. We interned together in 2018. We both did our rotations just about a year ago and ended up on connector and translators together. I went to Tufts University and graduated in 2019. Michael Lynn (03:02): And Tim, welcome. Tim Fogarty (03:04): Hey, Mike. So I'm Tim Fogarty. I'm also a software engineer on the connectors and translators team. I actually worked for mLab, the MongoDB hosting service, which was acquired by MongoDB about two years ago. So I was working there before MongoDB and now I'm working on the connectors and translators team. Michael Lynn (03:25): Fantastic. And Nic, who are you?? Nic Raboy (03:27): I am Nic and I am Mike's co-host for this fabulous podcast and the developer relations team at MongoDB. Michael Lynn (03:33): Connectors and translators. It's a fascinating topic. We were talking before we started recording and I made the incorrect assumption that connectors and translators are somewhat overlooked and might not even appear on the front page, but that's not the case. So Tim, I wonder if I could ask you to explain what connectors and translators are? What kind of software are we talking about? Tim Fogarty (03:55): Yeah, so our team works on essentially three different software groups. We have the BI Connector or the business intelligence connector, which is used to essentially translate SQL commands into MongoDB commands so that you can use it with tools like Tableau or PowerBI, those kinds of business intelligence tools. Tim Fogarty (04:20): Then we also have the database tools, which are used for importing and exporting data, creating backups on the command line, and then also mongomirror, which is used internally for the Atlas Live Migrates function. So you're able to migrate a MongoDB database into a MongoDB apps cloud service. Tim Fogarty (04:39): The connectors and translators, it's a bit of a confusing name. And we also have other products which are called connectors. So we have the Kafka connector and Spark connector, and we actually don't work on those. 
So it's a bit of an awkward name, but essentially we're dealing with backups restores, migrations, and translating SQL. Michael Lynn (04:58): So you mentioned the BI Connector and Tableau and being able to use SQL with MongoDB. Can we maybe take a step back and talk about why somebody might even want to use a connector, whether that the BI one or something else with MongoDB? Varsha Subrahmanyam (05:16): Yeah. So I can speak about that a little bit. The reason why we might want to use the BI Connector is for people who use business intelligence tools, they're mostly based on SQL. And so we would like people to use the MongoDB query language. So we basically had this translation engine that connects business intelligence tools to the MongoDB back end. So the BI Connector received SQL queries. And then the BI Connector translates those into SQL, into the MongoDB aggregation language. And then queries MongoDB and then returns the result. So it's very easy to store your data at MongoDB without actually knowing how to query the database with MQL. Michael Lynn (06:03): Is this in real time? Is there a delay or a lag? Varsha Subrahmanyam (06:06): Maybe Evgeni can speak a bit to this? I believe most of this happens in memory. So it's very, very quick and we are able to process, I believe at this point 100% of all SQL queries, if not very close to that. But it is very, very quick. Michael Lynn (06:22): Maybe I've got an infrastructure in place where I'm leveraging a BI tool and I want to make use of the data or an application that leverages MongoDB on the back end. That sounds like a popular used case. I'm curious about how it does that. Is it just a straight translation from the SQL commands and the operators that come to us from SQL? > > >"So if you've heard of transpilers, they translate code from one higher >level language to another. Regular compilers will translate high level >code to lower level code, something like assembly, but the BI Connector >acts like a transpilers where it's translating from SQL to the MongoDB >query language."" -- Varsha Subrahmanyam on the BI Connector > > Varsha Subrahmanyam (06:47): So if you've heard of transpilers, they translate code from one higher level language to another. Regular compilers will translate high level code to lower level code, something like assembly, but the BI Connector acts like a transpilers where it's translating from SQL to the MongoDB query language. And there are multiple steps to a traditional compiler. There's the front end that basically verifies the SQL query from both a semantic and syntactic perspective. Varsha Subrahmanyam (07:19): So kind of like does this query make sense given the context of the language itself and the more granularly the database in question. And then there are two more steps. There's the middle end and the back end. They basically just after verifying the query is acceptable, will then actually step into the translation process. Varsha Subrahmanyam (07:40): We basically from the syntactic parsing segment of the compiler, we produce this parse tree which basically takes all the tokens, constructs the tree out of them using the grammar of SQL and then based off of that, we will then start the translation process. And there's something called push-down. Evgeni, if you want to talk about that. Evgeni Dobranov (08:03): Yeah, I actually have not done or worked with any code that does push-down specifically, unfortunately. Varsha Subrahmanyam (08:09): I can talk about that. Evgeni Dobranov (08:13): Yeah. 
It might be better for you. Varsha Subrahmanyam (08:13): Yeah. In push-down basically, we basically had this parse tree and then from that we construct something called a [query plan, which basically creates stages for every single part of the SQL query. And stages are our internal representation of what those tokens mean. So then we construct like a linear plan, and this gets us into something called push-down. Varsha Subrahmanyam (08:42): So basically let's say you have, I suppose like a normal SELECT query. The SELECT will then be a stage in our intermediate representation of the query. And that slowly will just translate single token into the equivalent thing in MQL. And we'll do that in more of a linear fashion, and that slowly will just generate the MQL representation of the query. Michael Lynn (09:05): Now, there are differences in the way that data is represented between a relational or tabular database and the way that MongoDB represents it in document. I guess, through the push-down and through the tokenization, you're able to determine when a SQL statement comes in that is referencing what would be columns if there's a translator that makes that reference field. Varsha Subrahmanyam (09:31): Right, right. So we have similar kinds of ways of translating things from the relational model to the document model. Tim Fogarty (09:39): So we have to either sample or set a specific schema for the core collection so that it looks like it's a table with columns. Mike, maybe you can talk a little bit more about that. Michael Lynn (09:55): Yeah. So is there a requirement to use the BI Connector around normalizing your data or providing some kind of hint about how you're representing the data? Varsha Subrahmanyam (10:06): That I'm not too familiar with. Nic Raboy (10:10): How do you even develop such a connector? What kind of technologies are you using? Are you using any of the MongoDB drivers in the process as well? Varsha Subrahmanyam (10:18): I know for the BI Connector, a lot of the code was borrowed from existing parsing logic. And then it's all written in Go. Everything on our team is written in Go. It's been awhile since I have been on this recode, so I am not too sure about specific technologies that are used. I don't know if you recall, Evgeni. Evgeni Dobranov (10:40): Well, I think the biggest thing is the Mongo AST, the abstract syntax tree, which has also both in Go and that sort of like, I think what Varsha alluded to earlier was like the big intermediate stage that helps translate SQL queries to Mongo queries by representing things like taking a programming language class in university. It sort of represents things as nodes in a tree and sort of like relates how different like nouns to verbs and things like that in like a more grammatical sense. Michael Lynn (11:11): Is the BI Connector open source? Can people take a look at the source code to see how it works? Evgeni Dobranov (11:16): It is not, as far as I know, no. Michael Lynn (11:19): That's the BI Connector. I'm sure there's other connectors that you work on. Let's talk a little bit about the other connectors that you guys work on. Nic Raboy (11:26): Yeah. Maybe what's the most interesting one. What's your personal favorites? I mean, you're probably all working on one separately, but is there one that's like commonly cool and commonly beneficial to the MongoDB customers? 
Evgeni Dobranov (11:39): Well, the one I've worked on the most recently personally at least has been mongomirror and I've actually come to like it quite a bit just because I think it has a lot of really cool components. So just as a refresher, mongomirror is the tool that we use or the primary tool that Atlas uses to help customers with live migration. So what this helps them essentially do is they could just be running a database, taking in writes and reads and things like that. And then without essentially shutting down the database, they can migrate over to a newer version of Mongo. Maybe just like bigger clusters, things like that, all using mongomirror. Evgeni Dobranov (12:16): And mongomirror has a couple of stages that it does in order to help with the migration. It does like an initial sync or just copies the existing data as much as it can. And then it also records. It also records operations coming in as well and puts them in the oplog, which is essentially another collection of all the operations that are being done on the database while the initial sync is happening. And then replays this data on top of your destination, the thing that you're migrating to. Evgeni Dobranov (12:46): So there's a lot of juggling basically with operations and data copying, things like that. I think it's a very robust system that seems to work well most of the time actually. I think it's a very nicely engineered piece of software. Nic Raboy (13:02): I wanted to comment on this too. So this is a plug to the event that we actually had recently called MongoDB Live for one of our local events though for North America. I actually sat in on a few sessions and there were customer migration stories where they actually used mongomirror to migrate from on-premise solutions to MongoDB Atlas. It seems like it's the number one tool for getting that job done. Is this a common scenario that you have run into as well? Are people using it for other types of migrations as well? Like maybe Atlas, maybe AWS to GCP even though that we have multi-cloud now, or is it mostly on prem to Atlas kind of migrations? Evgeni Dobranov (13:43): We work more on maintaining the software itself, having taken the request from the features from the Atlas team. The people that would know exactly these details, I think would be the TSEs, the technical services engineers, who are the ones working with the actual customers, and they receive more information about exactly what type of migration is happening, whether it's from private database or Mongo Atlas or private to private, things like that. But I do know for a fact that you have all combinations of migrations. Mongomirror is not limited to a single type. Tim can expand more on this for sure. Tim Fogarty (14:18): Yeah. I'd say definitely migrating from on-prem to Atlas is the number one use case we see that's actually the only technically officially supported use case. So there are customers who are doing other things like they're migrating on-prem to on-prem or one cloud to another cloud. So it definitely does happen. But by far, the largest use case is migrating to Atlas. And that is the only use case that we officially test for and support. Nic Raboy (14:49): I actually want to dig deeper into mongomirror as well. I mean, how much data can you move with it at a certain time? Do you typically like use a cluster of these mongomirrors in parallel to move your however many terabytes you might have in your cluster? Or maybe go into the finer details on how it works? 
Tim Fogarty (15:09): Yeah, that would be cool, but that would be much more difficult. So we generally only spin up one mongomirror machine. So if we have a source cluster that's on-prem, and then we have our destination cluster, which is MongoDB Atlas, we spin up a machine that's hosted by us or you can run MongoDB on-prem yourself, if you want to, if there are, let's say firewall concerns, and sometimes make it a little bit easier. Tim Fogarty (15:35): But a single process and then the person itself is paralyzed. So it will, during the initial sync stage Evgeni mentioned, it will copy over all of the data for each collection in parallel, and then it will start building indexes in parallels as well. You can migrate over terabytes of data, but it can take a very long time. It can be a long running process. We've definitely seen customers where if they've got very large data sets, it can take weeks to migrate. And particularly the index build phase takes a long time because that's just a very compute intensive like hundreds of thousands of indexes on a very large data set. > > >"But then once the initial sync is over, then we're just in the business >of replicating any changes that happen to the source database to the >destination cluster." -- Tim Fogarty on the mongomirror process of >migrating data from one cluster to another. > > Tim Fogarty (16:18): But then once the initial sync is over, then we're just in the business of replicating any changes that happen to the source database to the destination cluster. Nic Raboy (16:28): So when you say changes that happened to the source database, are you talking about changes that might have occurred while that migration was happening? Tim Fogarty (16:35): Exactly. Nic Raboy (16:36): Or something else? Tim Fogarty (16:38): While the initial sync happens, we buffer all of the changes that happened to the source destination to a file. So we essentially just save them on disc, ready to replay them once we're finished with the initial sync. So then once the initial sync has finished, we replay everything that happened during the initial sync and then everything new that comes in, we also start to replay that once that's done. So we keep the two clusters in sync until the user is ready to cut over the application from there to source database over to their new destination cluster. Nic Raboy (17:12): When it copies over the data, is it using the same object IDs from the source database or is it creating new documents on the destination database? Tim Fogarty (17:23): Yeah. The object IDs are the same, I believe. And this is a kind of requirement because in the oplog, it will say like, "Oh, this document with this object ID, we need to update it or change it in this way." So when we need to reapply those changes to the destination kind of cluster, then we need to make sure that obviously the object ID matches that we're changing the right document when we need to reapply those changes. Michael Lynn (17:50): Okay. So there's two sources of data used in a mongomirror execution. There's the database, the source database itself, and it sounds like mongomirror is doing, I don't know, a standard find getting all of the documents from there, transmitting those to the new, the target system and leveraging an explicit ID reference so that the documents that are inserted have the same object ID. And then during that time, that's going to take a while, this is physics, folks. It's going to take a while to move those all over, depending on the size of the database. 
Michael Lynn (18:26): I'm assuming there's a marketplace in the oplog or at least the timestamp of the, the time that the mongomirror execution began. And then everything between that time and the completion of the initial sync is captured in oplog, and those transactions in the oplog are used to recreate the transactions that occurred in the target database. Tim Fogarty (18:48): Yeah, essentially correct. The one thing is the initial sync phase can take a long time. So it's possible that your oplog, because the oplog is a cap collection, which means it can only be a certain finite size. So eventually the older entries just start getting deleted when they're not used. As soon as we start the initial sync, we start listening to the oplog and saving it to the disc that we have the information saved. So if we start deleting things off the back of the oplog, we don't essentially get lost. Michael Lynn (19:19): Great. So I guess a word of caution would be ensure that you have enough disc space available to you in order to execute. Tim Fogarty (19:26): Yes, exactly. Michael Lynn (19:29): That's mongomirror. That's great. And I wanted to clarify, mongomirror, It sounds like it's available from the MongoDB Atlas console, right? Because we're going to execute that from the console, but it also sounds like you said it might be available for on-prem. Is it a downloadable? Is it an executable command line? Tim Fogarty (19:47): Yeah. So in general, if you want to migrate into Atlas, then you should use the Atlas Live Migrate service. So that's available on the Atlas console. It's like click and set it up and that's the easiest way to use it. There are some cases where for some reason you might need to run mongomirror locally, in which case you can download the binaries and run it locally. Those are kind of rare cases. I think that's probably something you should talk to support about if you're concerned that you might work locally. Nic Raboy (20:21): So in regards to the connectors like mongomirror, is there anything that you've done recently towards the product or anything that's coming soon on the roadmap? Evgeni Dobranov (20:29): So Varsha and I just finished a big epic on Jira, which improves status reporting. And basically this was like a huge collection of tickets that customers have come to us over time, basically just saying, "We wish there was a better status here. We wish there was a better logging or I wish the logs gave us a better idea of what was going on in mongomirror internally. So we basically spent about a month or so, and Varsha spent quite a bit of time on a ticket recently that she can talk about. We just spent a lot of time improving error messages and revealing information that previously wasn't revealed to help users get a better idea of what's going on in the internals of mongomirror. Varsha Subrahmanyam (21:12): Yeah. The ticket I just finished but was working on for quite some time, was to provide better logging during the index building process, which happens during initial sync and then again during all oplog sync. Now, users will be able to get logs at a collection level telling them what percentage of indexes have been built on a particular collection as well as on each host in their replica set. And then also if they wanted to roll that information from the HTTP server, then they can also do that. Varsha Subrahmanyam (21:48): So that's an exciting addition, I think. 
And now I'm also enabling those logs in the oplog sync portion of mongomirror, which is pretty similar, but probably we'll probably have a little bit less information just because we're figuring out which indexes need to be built on a rolling basis because we're just tailoring the oplog and seeing what comes up. So by the nature of that, there's a little less information on how many indexes can you expect to be built. You don't exactly know from the get-go, but yeah, I think that'll be hopefully a great help to people who are unsure if their indexes are stalled or are just taking a long time to build. Michael Lynn (22:30): Well, some fantastic updates. I want to thank you all for stopping by. I know we've got an entire set of content that I wanted to cover around the tools that you work on. Mongoimport, Mongoexport, Mongorestore, Mongodump. But I think I'd like to give that the time that it deserves. That could be a really healthy discussion. So I think what I'd like to do is get you guys to come back. That sound good? Varsha Subrahmanyam (22:55): Yeah. Tim Fogarty (22:56): Yeah. Varsha Subrahmanyam (22:56): Sounds good. Evgeni Dobranov (22:56): Yeah. Sounds great. Michael Lynn (22:57): Well, again, I want to thank you very much. Is there anything else you want the audience to know before we go? How can they reach out to you? Are you on social media, LinkedIn, Twitter? This is a time to plug yourself. Varsha Subrahmanyam (23:09): You can find me on LinkedIn. Tim Fogarty (23:12): I'm trying to stay away from social media recently. Nic Raboy (23:15): No problem. Tim Fogarty (23:16): No, please don't contact me. Michael Lynn (23:19): I get that. I get it. Tim Fogarty (23:21): You can contact me, I'll tell you where, on the community forums. Michael Lynn (23:25): There you go. Perfect. Tim Fogarty (23:27): If you have questions- Michael Lynn (23:28): Great. Tim Fogarty (23:29): If you have questions about the database tools, then you can ask questions there and I'll probably see it. Michael Lynn (23:34): All right. So community.mongodb.com. We'll all be there. If you have questions, you can swing by and ask them in that forum. Well, thanks once again, everybody. Tim Fogarty, Varsha Subrahmanyam, and Evgeni Dobranov. Evgeni Dobranov (23:47): Yes, you got it. Michael Lynn (23:48): All right. So thanks so much for stopping by. Have a great day. Varsha Subrahmanyam (23:52): Thank you. ## Summary I hope you enjoyed this episode of the MongoDB Podcast and learned a bit more about the MongoDB Connectors and Translators including the Connector for Business Intelligence and mongomirror. If you enjoyed this episode, please consider giving a review on your favorite podcast networks including Apple, Google, and Spotify. For more information on the BI Connector, visit our docs or product pages. For more information on mongomirror, visit the docs.
md
{ "tags": [ "Connectors", "Kafka", "Spark" ], "pageDescription": "MongoDB Podcast Interview with Connectors and Translators Team", "contentType": "Podcast" }
MongoDB Podcast Interview with Connectors and Translators Team
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/triggers-tricks-data-driven-schedule
created
# Realm Triggers Treats and Tricks - Document-Based Trigger Scheduling

In this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.

Essentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.

- **Database triggers:** Triggers that fire on database events—like `deletes`, `inserts`, `updates`, and `replaces`.
- **Scheduled triggers:** We can schedule a trigger based on a `cron` expression via scheduled triggers.
- **Authentication triggers:** These triggers are only relevant for Realm authentication. They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.

For this blog post, I would like to focus on trigger scheduling patterns.

Let me present a use case and we will see how the discussed mechanics might help us in this scenario. Consider a meeting management application that schedules meetings and, as part of its functionality, needs to notify a user 10 minutes before the meeting.

How would we create a trigger that will be fired 10 minutes before a timestamp which is only known by the "meeting" document?

First, let's have a look at an example document from the meetings collection:

``` javascript
{
  _id : ObjectId("5ca4bbcea2dd94ee58162aa7"),
  event : "Mooz Meeting",
  eventDate : ISODate("2021-03-20T14:00:00Z"),
  meetingUrl : "https://mooz.meeting.com/5ca4bbcea2dd94ee58162aa7",
  invites : ["user1@example.com", "user2@example.com"]
}
```

I wanted to share an interesting solution based on triggers, and throughout this article, we will use a meeting notification example to explain the discussed approach.

## Prerequisites

First, verify that you have an Atlas project with owner privileges to create triggers.

- MongoDB Atlas account, Atlas cluster
- A MongoDB Realm application or access to MongoDB Atlas triggers.

> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.

## The Idea Behind the Main Mechanism

I will use the event example as a source document initiating the flow with an insert to a `meetings` collection:

``` javascript
{
  _id : ObjectId("5ca4bbcea2dd94ee58162aa7"),
  event : "Mooz Meeting",
  eventDate : ISODate("2021-03-20T11:00:00Z"),
  meetingUrl : "https://mooz.meeting.com/5ca4bbcea2dd94ee58162aa7",
  invites : ["user1@example.com"],
  phone : "+123456789"
}
```

Once we insert this document into the `meetings` collection, an insert trigger will create the following record in a helper collection called `notifications`:

``` javascript
{
  _id : ObjectId("5ca4bbcea2dd94ee58162aa7"),
  triggerDate : ISODate("2021-03-20T10:50:00Z")
}
```

The `triggerDate` and `_id` are calculated from the source document: the trigger date is 10 minutes before the meeting's `eventDate`, and the `_id` is duplicated so we can look up the meeting later. Once `2021-03-20T10:50:00Z` arrives, a `fireScheduleTasks` trigger fires. This trigger is based on the delete operation caused by a TTL index on the `triggerDate` field of the `notifications` collection. This is when the user gets the reminder!

On a high level, here is the flow described above. A meeting document is tracked by a trigger, creating a notification document. This document, at the specified time, will cause a delete event. The delete will fire a notification trigger to notify the user.
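Before diving into the components of this flow, here is a quick sketch of the date arithmetic the insert trigger will perform, written as plain JavaScript and using the example meeting above (the constant name is just for illustration):

``` javascript
// Meeting starts at 11:00 UTC; the reminder document should expire at 10:50 UTC.
const eventDate = new Date("2021-03-20T11:00:00Z");
const TEN_MINUTES_MS = 10 * 60 * 1000;

// Subtract 10 minutes from the event time to get the TTL expiry time
const triggerDate = new Date(eventDate.getTime() - TEN_MINUTES_MS);

console.log(triggerDate.toISOString()); // 2021-03-20T10:50:00.000Z
```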
There are three main components that allow our system to trigger based on our document data.

## 1. Define a Notifications Helper Collection

First, we need to prepare our `notifications` collection. This collection will be created implicitly by the following index creation command.

Now we will create a TTL index. This index will cause a scheduling document to expire (and be deleted) zero seconds after the time stored in its `triggerDate` field.

``` javascript
db.notifications.createIndex( { "triggerDate": 1 }, { expireAfterSeconds: 0 } )
```

## 2. Building a Trigger to Populate the Schedule Collection

When setting up your `scheduleTasks` trigger, make sure you provide the following:

1. Linked Atlas service and verify its name.
2. The database and collection name we are basing the scheduling on, e.g., `meetings`.
3. The relevant trigger operation that we want to schedule upon, e.g., when an event is inserted.
4. Link it to a function that will perform the schedule collection population.

My trigger UI configuration to populate the scheduling collection.

To populate the `notifications` collection with the relevant trigger dates, we need to monitor our documents in the source collection. In our case, the user's upcoming meeting data is stored in the `meetings` collection. Our trigger will monitor inserts to populate a scheduling document.

``` javascript
exports = function(changeEvent) {

  // Get the notifications collection
  const coll = context.services.get("<SERVICE_NAME>").db("<DATABASE_NAME>").collection("notifications");

  // Calculate the "triggerDate" (10 minutes before the event) and duplicate the _id
  const calcTriggerDate = new Date(changeEvent.fullDocument.eventDate.getTime() - 10 * 60 * 1000);

  return coll.insertOne({ _id: changeEvent.fullDocument._id, triggerDate: calcTriggerDate });
};
```

>Important: Please replace `<SERVICE_NAME>` and `<DATABASE_NAME>` with your linked service and database names.

## 3. Building the Trigger to Perform the Action on the "Trigger Date"

To react to the TTL "delete" event happening exactly when we want our scheduled task to be executed, we need to use an "on delete" database trigger I call `fireScheduleTasks`.

When setting up your `fireScheduleTasks` trigger, make sure you provide the following:

1. Linked Atlas service and verify its name.
2. The database and collection for the notifications collection, e.g., `notifications`.
3. The relevant trigger operation that we want to schedule upon, which is "DELETE."
4. Link it to a function that will perform the fired task.

Now that we have populated the `notifications` collection with the `triggerDate`, we know the TTL index will fire a "delete" event with the relevant deleted `_id` so we can act upon our task.

In my case, 10 minutes before the user's event starts, my document will reach its lifetime and I will send a text using the Twilio service to the attendee's phone.

A prerequisite for this stage will be to set up a Twilio service using your Twilio cloud credentials.

1. Make sure you have a Twilio cloud account with its SID and your Auth token.
2. Set up the SID and Auth token into the Realm Twilio service configuration.
3. Configure your Twilio Messaging service and phone number.

Once we have it in place, we can use it to send SMS notifications to our invites.
``` javascript
exports = async function(changeEvent) {

  // Get the meetings collection
  const coll = context.services.get("<SERVICE_NAME>").db("<DATABASE_NAME>").collection("meetings");

  // Read the specific meeting document
  const doc = await coll.findOne({ _id: changeEvent.documentKey._id });

  // Send a notification via Twilio SMS
  const twilio = context.services.get("<TWILIO_SERVICE_NAME>");

  twilio.send({
    to: doc.phone,
    from: "+123456789",
    body: `Reminder : Event ${doc.event} is about to start in 10min at ${doc.eventDate}`
  });
};
```

>Important: Replace `<SERVICE_NAME>`, `<DATABASE_NAME>`, and `<TWILIO_SERVICE_NAME>` with your linked service, database, and Twilio service names.

That's how the event was fired at the appropriate time.

## Wrap Up

With the presented technique, we can leverage existing triggering patterns to build new ones. This may open your mind to other ideas to design your next flows on MongoDB Realm. In the following article in this series, we will learn how we can implement auto-increment with triggers.

> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
md
{ "tags": [ "Atlas" ], "pageDescription": "In this article, we will explore a trick that lets us invoke a trigger task based on a date document field in our collections.", "contentType": "Article" }
Realm Triggers Treats and Tricks - Document-Based Trigger Scheduling
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/sending-requesting-data-mongodb-unity-game
created
# Sending and Requesting Data from MongoDB in a Unity Game Are you working on a game in Unity and finding yourself needing to make use of a database in the cloud? Storing your data locally works for a lot of games, but there are many gaming scenarios where you'd need to leverage an external database. Maybe you need to submit your high score for a leaderboard, or maybe you need to save your player stats and inventory so you can play on numerous devices. There are too many reasons to list as to why a remote database might make sense for your game. If you've been keeping up with the content publishing on the MongoDB Developer Hub and our Twitch channel, you'll know that I'm working on a game development series with Adrienne Tacke. This series is centered around creating a 2D multiplayer game with Unity that uses MongoDB as part of the online component. Up until now, we haven't actually had the game communicate with MongoDB. In this tutorial, we're going to see how to make HTTP requests from a Unity game to a back end that communicates with MongoDB. The back end was already developed in a tutorial titled, Creating a User Profile Store for a Game With Node.js and MongoDB. We're now going to leverage it in our game. To get an idea where we're at in the tutorial series, take a look at the animated image below: To take this to the next level, it makes sense to send data to MongoDB when the player crosses the finish line. For example, we can send how many steps were taken by the player in order to reach the finish line, or how many times the player collided with something, or even what place the player ranked in upon completion. The data being sent doesn't truly matter as of now. The assumption is that you've been following along with the tutorial series and are jumping in where we left off. If not, some of the steps that refer to our project may not make sense, but the concepts can be applied in your own game. The tutorials in this series so far are: - Designing a Strategy to Develop a Game with Unity and MongoDB - Creating a User Profile Store for a Game with Node.js and MongoDB - Getting Started with Unity for Creating a 2D Game - Designing and Developing 2D Game Levels with Unity and C# If you'd like to view the source code to the project, it can be found on GitHub. ## Creating a C# Class in Unity to Represent the Data Model Within MongoDB Because Unity, as of now, doesn't have an official MongoDB driver, sending and receiving MongoDB data from Unity isn't handled for you. We're going to have to worry about marshalling and unmarshalling our data as well as making the request. In other words, we're going to need to manipulate our data manually to and from JSON and C# classes. To make this possible, we're going to need to start with a class that represents our data model in MongoDB. Within your project's **Assets/Scripts** directory, create a **PlayerData.cs** file with the following code: ``` csharp using UnityEngine; public class PlayerData { public string plummie_tag; public int collisions; public int steps; } ``` Notice that this class does not extend the `MonoBehavior` class. This is because we do not plan to attach this script as a component on a game object. The `public`-defined properties in the `PlayerData` class represent each of our database fields. In the above example, we only have a select few, but you could add everything from our user profile store if you wanted to. It is important to use the `public` identifier for anything that will have relevance to the database. 
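For reference, a persisted document for this model would look something like the following sketch. The values are examples, `_id` is added by MongoDB automatically, and the user profile store from the earlier tutorial may hold additional fields:

``` javascript
{
  "plummie_tag": "nraboy",
  "collisions": 0,
  "steps": 0
}
```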
We need to make a few more changes to the `PlayerData` class. Add the following functions to the class:

``` csharp
using UnityEngine;

public class PlayerData
{

    // 'public' variables here ...

    public string Stringify()
    {
        return JsonUtility.ToJson(this);
    }

    public static PlayerData Parse(string json)
    {
        return JsonUtility.FromJson<PlayerData>(json);
    }

}
```

Notice the function names are kind of like what you'd find in JavaScript if you are a JavaScript developer. Unity expects us to send string data in our requests rather than objects. The good news is that Unity also provides a helper `JsonUtility` class that will convert objects to strings and strings to objects.

The `Stringify` function will take all `public` variables in the class and convert them to a JSON string. The fields in the JSON object will match the names of the variables in the class. The `Parse` function will take a JSON string and convert it back into an object that can be used within C#.

## Sending Data with POST and Retrieving Data with GET in a Unity Game

With a class available to represent our data model, we can now send data to MongoDB as well as retrieve it. Unity provides a UnityWebRequest class for making HTTP requests within a game. This will be used to communicate with either a back end designed with a particular programming language or a MongoDB Realm webhook.

If you'd like to learn about creating a back end to be used with a game, check out my previous tutorial on the topic.

We're going to spend the rest of our time in the project's **Assets/Scripts/Player.cs** file. This script is attached to our player as a component and was created in the tutorial titled, Getting Started with Unity for Creating a 2D Game. In your own game, it doesn't really matter which game object script you use.

Open the **Assets/Scripts/Player.cs** file and make sure it looks similar to the following:

``` csharp
using UnityEngine;
using System.Text;
using UnityEngine.Networking;
using System.Collections;

public class Player : MonoBehaviour
{

    public float speed = 1.5f;

    private Rigidbody2D _rigidBody2D;
    private Vector2 _movement;

    void Start()
    {
        _rigidBody2D = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // Mouse and keyboard input logic here ...
    }

    void FixedUpdate()
    {
        // Physics related updates here ...
    }

}
```

I've stripped out a bunch of code from the previous tutorial as it doesn't affect anything we're planning on doing. The previous code was very heavily related to moving the player around on the screen and should be left in for the real game, but is overlooked in this example, at least for now.

Two things to notice that are important are the imports:

``` csharp
using System.Text;
using UnityEngine.Networking;
```

The above two imports are important for the networking features of Unity. Without them, we wouldn't be able to properly make GET and POST requests.

Before we make a request, let's get our `PlayerData` class included.
Make the following changes to the **Assets/Scripts/Player.cs** code:

``` csharp
using UnityEngine;
using System.Text;
using UnityEngine.Networking;
using System.Collections;

public class Player : MonoBehaviour
{

    public float speed = 1.5f;

    private Rigidbody2D _rigidBody2D;
    private Vector2 _movement;
    private PlayerData _playerData;

    void Start()
    {
        _rigidBody2D = GetComponent<Rigidbody2D>();
        _playerData = new PlayerData();
        _playerData.plummie_tag = "nraboy";
    }

    void Update() { }

    void FixedUpdate() { }

    void OnCollisionEnter2D(Collision2D collision)
    {
        _playerData.collisions++;
    }

}
```

In the above code, notice that we are creating a new `PlayerData` object and assigning the `plummie_tag` field a value. We're also making use of an `OnCollisionEnter2D` function to see if our game object collides with anything. Since our function is very vanilla, collisions can be with walls, objects, etc., and nothing in particular. The collisions will increase the `collisions` counter.

So, we have data to work with, data that we need to send to MongoDB. To do this, we need to create some `IEnumerator` functions and make use of coroutine calls within Unity. This will allow us to do asynchronous activities such as make web requests.

Within the **Assets/Scripts/Player.cs** file, add the following `IEnumerator` function:

``` csharp
IEnumerator Download(string id, System.Action<PlayerData> callback = null)
{
    using (UnityWebRequest request = UnityWebRequest.Get("http://localhost:3000/plummies/" + id))
    {
        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
        {
            Debug.Log(request.error);
            if (callback != null)
            {
                callback.Invoke(null);
            }
        }
        else
        {
            if (callback != null)
            {
                callback.Invoke(PlayerData.Parse(request.downloadHandler.text));
            }
        }
    }
}
```

The `Download` function will be responsible for retrieving data from our database to be brought into the Unity game. It expects an `id`, which we'll use the `plummie_tag` for, and a `callback` so we can work with the response outside of the function. The response should be `PlayerData`, the data model we just made.

After sending the request, we check to see if there were errors or if it succeeded. If the request succeeded, we can convert the JSON string into an object and invoke the callback so that the parent can work with the result.

Sending data with a payload, like that in a POST request, is a bit different. Take the following function:

``` csharp
IEnumerator Upload(string profile, System.Action<bool> callback = null)
{
    using (UnityWebRequest request = new UnityWebRequest("http://localhost:3000/plummies", "POST"))
    {
        request.SetRequestHeader("Content-Type", "application/json");

        byte[] bodyRaw = Encoding.UTF8.GetBytes(profile);
        request.uploadHandler = new UploadHandlerRaw(bodyRaw);
        request.downloadHandler = new DownloadHandlerBuffer();

        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
        {
            Debug.Log(request.error);
            if (callback != null)
            {
                callback.Invoke(false);
            }
        }
        else
        {
            if (callback != null)
            {
                callback.Invoke(request.downloadHandler.text != "{}");
            }
        }
    }
}
```

In the `Upload` function, we are expecting a JSON string of our profile. This profile was defined in the `PlayerData` class and it is the same data we received in the `Download` function.

The difference between these two functions is that the POST is sending a payload. For this to work, the JSON string needs to be converted to `byte[]` and the upload and download handlers need to be defined. Once this is done, it is business as usual.
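One small aside about the error check in these two coroutines: newer Unity versions (roughly 2020.2 onward) flag `isNetworkError` and `isHttpError` as obsolete in favor of a single result enum. If your project is on a recent Unity release, the equivalent check would look something like the following sketch, with the rest of the coroutine unchanged:

``` csharp
// Newer Unity versions expose a single result enum on UnityWebRequest
// instead of the separate isNetworkError / isHttpError flags used above.
if (request.result != UnityWebRequest.Result.Success)
{
    Debug.Log(request.error);
    if (callback != null)
    {
        callback.Invoke(false);
    }
}
else
{
    if (callback != null)
    {
        callback.Invoke(request.downloadHandler.text != "{}");
    }
}
```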
Back to the callback: it is up to you what you want to return to the parent. Because we are creating data, I thought it'd be fine to just return `true` if successful and `false` if not. To demonstrate this, if there are no errors, the response is compared against an empty object string. If an empty object comes back, then false. Otherwise, true. This probably isn't the best way to respond after a creation, but that is up to the creator (you, the developer) to decide.

The functions are created. Now, we need to use them.

Let's make a change to the `Start` function:

``` csharp
void Start()
{
    _rigidBody2D = GetComponent<Rigidbody2D>();
    _playerData = new PlayerData();
    _playerData.plummie_tag = "nraboy";
    StartCoroutine(Download(_playerData.plummie_tag, result => {
        Debug.Log(result);
    }));
}
```

When the script runs—or in our example, when the game runs—and the player enters the scene, the `StartCoroutine` method is executed. We are providing the `plummie_tag` as our lookup value and we are printing out the results that come back.

We might want the `Upload` function to behave a little differently. Instead of making the request immediately, maybe we want to make the request when the player crosses the finish line. For this, maybe we add some logic to the `FixedUpdate` method instead:

``` csharp
void FixedUpdate()
{
    // Movement logic here ...

    if (_rigidBody2D.position.x > 24.0f)
    {
        StartCoroutine(Upload(_playerData.Stringify(), result => {
            Debug.Log(result);
        }));
    }
}
```

In the above code, we check to see if the player position is beyond a certain value on the x-axis. If this is true, we execute the `Upload` function and print the results.

The above example isn't without issues though. As of now, if we cross the finish line, we're going to experience many requests as our code will continuously execute. We can correct this by adding a boolean variable into the mix.

At the top of your **Assets/Scripts/Player.cs** file with the rest of your variable declarations, add the following:

``` csharp
private bool _isGameOver;
```

The idea is that when the `_isGameOver` variable is true, we shouldn't be executing certain logic such as the web requests. We are going to initialize the variable as false in the `Start` method like so:

``` csharp
void Start()
{
    // Previous code here ...
    _isGameOver = false;
}
```

With the variable initialized, we can make use of it prior to sending an HTTP request after crossing the finish line. To do this, we'd make a slight adjustment to the code like so:

``` csharp
void FixedUpdate()
{
    // Movement logic here ...

    if (_rigidBody2D.position.x > 24.0f && _isGameOver == false)
    {
        StartCoroutine(Upload(_playerData.Stringify(), result => {
            Debug.Log(result);
        }));
        _isGameOver = true;
    }
}
```

After the player crosses the finish line, the HTTP code is executed and the game is marked as game over for the player, preventing further requests.

## Conclusion

You just saw how to use the `UnityWebRequest` class in Unity to make HTTP requests from a game to a remote web server that communicates with MongoDB. This is valuable for any game that needs to either store game information remotely or retrieve it.

There are plenty of other ways to make use of the `UnityWebRequest` class, even in our own player script, but the examples we used should be a great starting point.

This tutorial series is part of a series streamed on Twitch. To see these streams live as they happen, follow the Twitch channel and tune in.
md
{ "tags": [ "C#", "Unity" ], "pageDescription": "Learn how to interact with MongoDB from a Unity game with C# and the UnityWebRequest class.", "contentType": "Tutorial" }
Sending and Requesting Data from MongoDB in a Unity Game
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/products/atlas/building-e-commerce-content-catalog-atlas-search
created
# Building an E-commerce Content Catalog with Atlas Search

Search is now a fundamental part of applications across all industries—but especially so in the world of retail and e-commerce. If your customers can't find what they're looking for, they'll go to another website and buy it there instead. The best way to provide your customers with a great shopping experience is to provide a great search experience.

As far as searching goes, Atlas Search, part of MongoDB Atlas, is the easiest way to build rich, fast, and relevance-based search directly into your applications. In this tutorial, we'll make a website that has a simple text search and use Atlas Search to integrate full-text search capabilities, add autocomplete to our search box, and even promote some of our products on sale.

## Pre-requisites

You can find the complete source code for this application on GitHub. The application is built using the MERN stack. It has a Node.js back end running the Express framework, a MongoDB Atlas database, and a React front end.

## Getting started

First, start by cloning the repository that contains the starting source code.

```bash
git clone https://github.com/mongodb-developer/content-catalog
cd content-catalog
```

In this repository, you will see three sub-folders:

* `mdbstore`: contains the front end
* `backend`: has the Node.js back end
* `data`: includes a dataset that you can use with this e-commerce application

### Create a database and import the dataset

First, start by creating a free MongoDB Atlas cluster by following the instructions from the docs. Once you have a cluster up and running, find your connection string. You will use this connection string with `mongorestore` to import the provided dataset into your cluster.

>You can find the installation instructions and usage information for `mongorestore` from the MongoDB documentation.

Use your connection string without the database name at the end. It should look like `mongodb+srv://user:password@cluster0.xxxx.mongodb.net`.

```bash
cd data
mongorestore <your-connection-string>
```

This tool will automatically locate the BSON file from the dump folder and import these documents into the `items` collection inside the `grocery` database. You now have a dataset of about 20,000 items to use and explore.

### Start the Node.js backend API

The Node.js back end will act as an API that your front end can use. It will be connecting to your database by using a connection string provided in a `.env` file. Start by creating that file.

```bash
cd backend
touch .env
```

Open your favourite code editor, and enter the following in the `.env` file. Set `MONGODB_URI` to your connection string from MongoDB Atlas.

```
PORT=5050
MONGODB_URI=<your-connection-string>
```

Now, start your server. You can use the `node` executable to start your server, but it's easier to use `nodemon` while in development. This tool will automatically reload your server when it detects a change to the source code. You can find out more about installing the tool from the official website.

```bash
nodemon .
```

This command will start the server. You should see a message in your console confirming that the server is running and the database is connected.

### Start the React frontend application

It's now time to start the front end of your application. In a new terminal window, go to the `mdbstore` folder, install all the dependencies for this project, and start the project using `npm`.

```bash
cd ../mdbstore
npm install
npm start
```

Once this is completed, a browser tab will open, and you will see your fully functioning store.
The front end is a React application. Everything in the front end is already connected to the backend API, so we won't be making any changes here. Feel free to explore the source code to learn more about using React with a Node.js back end.

### Explore the application

Your storefront is now up and running. A single page lets you search for and list all products. Try searching for `chicken`. Well, you probably don't have a lot of results. As a matter of fact, you won't find any results. Now try `Boneless Chicken Thighs`. There's a match!

But that's not very convenient. Your users don't know the exact name of your products, never mind possible typos or mistakes. This e-commerce site offers a very poor experience to its customers and risks losing some business. In this tutorial, you will see how to leverage Atlas Search to provide a seamless experience to your users.

## Add full-text search capabilities

The first thing we'll do for our users is to add full-text search capabilities to this e-commerce application. By adding a search index, we will have the ability to search through all the text fields from our documents. So, instead of searching only for a product name, we can search through the name, category, tags, and so on.

Start by creating a search index on your collection. Find your collection in the MongoDB Atlas UI and click on Search in the top navigation bar. This will bring you to the Atlas Search Index creation screen.

Click on Create Index. From this screen, click Next to use the visual editor. Then, choose the newly imported data ('grocery/items') on the database and collection screen. Accept all the defaults and create that index.

While you're there, you can also create the index that will be used later for autocomplete.

Click Create Index again, and click Next to use the visual editor. Give this new index the name `autocomplete`, select 'grocery/items' again, and then click Next.

On the following screen, click the Refine Index button to add the autocomplete capabilities to the index.

Click on the Add Field button to add a new field that will support autocomplete searches. Choose the `name` field in the dropdown. Then toggle off the `Enable Dynamic Mapping` option. Finally, click Add data type, and from the dropdown, pick autocomplete. You can save these settings and click on the Create Search Index button.

You can find the detailed instructions to set up the index in this tutorial.

Once your index is created, you will be able to use the $search stage in an aggregation pipeline. The $search stage enables you to perform a full-text search in your collections. You can experiment by going to the Aggregations tab once you've selected your collection, or by using Compass, the MongoDB GUI.

The first aggregation pipeline we will create is for the search results. Rather than returning only results that have an exact match, we will use Atlas Search to return results that are similar or close to the user's search intent.

In the Aggregation Builder screen, create a new pipeline by adding a first $search stage. Use the following JSON for the first stage of your pipeline.

```javascript
{
  index: 'default',
  text: {
    query: "chicken",
    path: ["name"]
  }
}
```

And voilà! You already have much better search results. You could also add other stages here to limit the number of results or sort them in a specific order, as shown in the sketch below.
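For instance, a quick sketch of that idea, capping the results and surfacing the relevance score, could look like this (the projected fields are just examples from this dataset):

```javascript
[
  {
    $search: {
      index: 'default',
      text: { query: "chicken", path: ["name"] }
    }
  },
  // Atlas Search already returns documents ordered by relevance,
  // so a $limit simply keeps the best matches
  { $limit: 10 },
  // Surface the relevance score next to a couple of fields for inspection
  { $project: { name: 1, brand: 1, score: { $meta: "searchScore" } } }
]
```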
For this application, though, the single $search stage is all we need for now. Let's import it into the API used for this project.

In the file _backend/index.js_, look for the route that listens for GET requests on `/search/:query`. Here, replace the code between the comments with the code you used for your aggregation pipeline. This time, rather than using the hard-coded value, use `req.params.query` to use the query string sent to the server.

```javascript
/** TODO: Update this to use Atlas Search */
results = await itemCollection.aggregate([
  {
    $search: {
      index: 'default',
      text: {
        query: req.params.query,
        path: ["name"]
      }
    }
  }
]).toArray();
/** End */
```

The old code used the `find()` method to find an exact match. This new code uses the newly created Search index to return any records that would contain, in part or in full, the search term that we've passed to it.

If you try the application again with the word "Chicken," you will get many more results this time. In addition to that, you might also notice that your searches are now case-insensitive. But we can do even better. Sometimes, your users might be searching for more generic terms, such as one of the tags that describe the products or the brand name. Let's add more fields to this search to return more relevant records.

In the `$search` stage that you added in the previous code snippet, change the value of the path field to contain all the fields you want to search.

```javascript
/** TODO: Update this to use Atlas Search */
results = await itemCollection.aggregate([
  {
    $search: {
      index: 'default',
      text: {
        query: req.params.query,
        path: ["name", "brand", "category", "tags"]
      }
    }
  }
]).toArray();
/** End */
```

Experiment with your new application again. Try out some brand names that you know to see if you can find the product you are looking for. Your search capabilities are now much better, and the user experience of your website is already improved, but let's see if we can make this even better.

## Add autocomplete to your search box

A common feature of most modern search engines is an autocomplete dropdown that shows suggestions as you type. In fact, this is expected behaviour from users. They don't want to scroll through an infinite list of possible matches; they'd rather find the right one quickly.

In this section, you will use the Atlas Search autocomplete capabilities to enable this in your search box. The UI already has this feature implemented, and you already created the required indexes, but it doesn't show up because the API is sending back no results.

Open up the aggregation builder again to build a new pipeline. Start with a $search stage again, and use the following. Note how this $search stage uses the `autocomplete` index that was created earlier.

```javascript
{
  'index': 'autocomplete',
  'autocomplete': {
    'query': "chic",
    'path': 'name'
  },
  'highlight': {
    'path': [
      'name'
    ]
  }
}
```

In the preview panel, you should see some results containing the string "chic" in their name. That's a lot of potential matches. For our application, we won't want to return all possible matches. Instead, we'll only take the first five. To do so, a $limit stage is used to limit the results to five. Click on Add Stage, select $limit from the dropdown, and replace `number` with the value `5`.

The autocomplete aggregation pipeline in Compass

Excellent! Now we only have five results. Since this request will be executed on each keypress, we want it to be as fast as possible and limit the required bandwidth as much as possible. A $project stage can be added to help with this—we will return only the 'name' field instead of the full documents.
Click Add Stage again, select $project from the dropdown, and use the following JSON.

```javascript
{
  'name': 1,
  'highlights': {
    '$meta': 'searchHighlights'
  }
}
```

Note that we also added a new field named `highlights`. This field returns the metadata provided to us by Atlas Search. You can find a lot of information in this metadata, such as the score of each item. This can be useful to sort the data, for example.

Now that you have a working aggregation pipeline, you can use it in your application. In the file _backend/index.js_, look for the route that listens for GET requests on `/autocomplete/:query`. After the `TODO` comment, add the following code to execute your aggregation pipeline. Don’t forget to replace the hard-coded query with `req.params.query`. You can export the pipeline directly from Compass or use the following code snippet.

```javascript
// TODO: Insert the autocomplete functionality here
results = await itemCollection.aggregate([
  {
    '$search': {
      'index': 'autocomplete',
      'autocomplete': {
        'query': req.params.query,
        'path': 'name'
      },
      'highlight': {
        'path': [
          'name'
        ]
      }
    }
  },
  {
    '$limit': 5
  },
  {
    '$project': {
      'name': 1,
      'highlights': {
        '$meta': 'searchHighlights'
      }
    }
  }
]).toArray();
/** End */
```

Go back to your application, and test it out to see the new autocomplete functionality.

*The final application in action*

And look at that! Your site now offers a much better experience to your users with very little additional code.

## Add custom scoring to adjust search results

When delivering results to your users, you might want to push some products forward. Atlas Search can help you promote specific results by giving you the power to tweak the relevance score of the results. A typical example is to put the items currently on sale at the top of the search results. Let’s do that right away.

In the _backend/index.js_ file, replace the database query for the `/search/:query` route again to use the following aggregation pipeline.

```javascript
/** TODO: Update this to use Atlas Search */
results = await itemCollection.aggregate([
  {
    $search: {
      index: 'default',
      compound: {
        must: [
          {text: {
            query: req.params.query,
            path: ["name", "brand", "category", "tags"]
          }},
          {exists: {
            path: "price_special",
            score: {
              boost: {
                value: 3
              }
            }
          }}
        ]
      }
    }
  }
]).toArray();
/** End */
```

This might seem like a lot; let’s look at it in more detail.

```javascript
{
  $search: {
    index: 'default',
    compound: {
      must: [
        {...},
        {...}
      ]
    }
  }
}
```

First, we added a `compound` object to the `$search` operator. This lets us combine two or more operators in a single search. Then we use the `must` operator, which is the equivalent of a logical `AND` operator. In this new array, we added two search operations. The first one is the same `text` operator we had before. Let’s focus on the second one.

```javascript
{
  exists: {
    path: "price_special",
    score: {
      boost: {
        value: 3
      }
    }
  }
}
```

Here, we tell Atlas Search to boost the relevance score by a factor of three if the field `price_special` exists in the document. By doing so, any document that is on sale will have a much higher relevance score and end up at the top of the search results. If you try your application again, you should notice that the first results all have a sale price.

## Add fuzzy matching

Another common feature in product catalog search nowadays is fuzzy matching. Implementing a fuzzy matching feature can be somewhat complex, but Atlas Search makes it simpler.
In a `text` search, you can add the `fuzzy` field to specify that you want to add this capability to your search results. You can tweak this functionality using multiple options, but we’ll stick to the defaults for this application. Once again, in the _backend/index.js_ file, change the `/search/:query` route to the following.

```javascript
/** TODO: Update this to use Atlas Search */
results = await itemCollection.aggregate([
  {
    $search: {
      index: 'default',
      compound: {
        must: [
          {text: {
            query: req.params.query,
            path: ["name", "brand", "category", "tags"],
            fuzzy: {}
          }},
          {exists: {
            path: "price_special",
            score: {
              boost: {
                value: 3
              }
            }
          }}
        ]
      }
    }
  }
]).toArray();
/** End */
```

You’ll notice that the difference is very subtle. A single line was added.

```javascript
fuzzy: {}
```

This enables fuzzy matching for this `$search` operation. It means that the search engine will look for exact keyword matches as well as matches that differ slightly, so results that are likely to be relevant are still returned even when the search term doesn’t correspond exactly to the stored text. Try out your application again, and this time, try searching for `chickn`. You should still be able to see some results, despite the typo.

## Summary

To ensure that your website is successful, you need to make it easy for your users to find what they are looking for. In addition, there might be some products that you want to push forward. Atlas Search offers all the necessary tooling to let you quickly add those features to your application, all by using the same MongoDB Query API you are already familiar with. On top of that, there’s no need to maintain a second server and keep it synchronized with a search engine.

All of these features are available right now on MongoDB Atlas. If you haven’t already, why not give it a try right now on our free-to-use clusters?
md
{ "tags": [ "Atlas", "JavaScript" ], "pageDescription": "In this tutorial, we’ll make a website that has a simple text search and use Atlas Search to promote some of our products on sale.", "contentType": "Tutorial" }
Building an E-commerce Content Catalog with Atlas Search
2024-05-20T17:32:23.501Z
devcenter
https://www.mongodb.com/developer/languages/csharp/getting-started-with-mongodb-atlas-and-azure-functions-using-net
created
# Getting Started with MongoDB Atlas and Azure Functions using .NET and C#

So you need to build an application with minimal operating costs that can also scale to meet the growing demand of your business. This is a perfect scenario for a serverless function, like those built with Azure Functions. With serverless functions you can focus more on the application and less on the infrastructure and operations side of things. However, what happens when you need to include a database in the mix?

In this tutorial we'll explore how to create a serverless function with Azure Functions and the .NET runtime to interact with MongoDB Atlas. If you're not familiar with MongoDB, it offers a flexible data model that can be used for a variety of use cases while being integrated into most application development stacks with ease. Scaling your MongoDB database and Azure Functions to meet demand is easy, making them a perfect match.

## Prerequisites

There are a few requirements that must be met prior to starting this tutorial:

- The Azure CLI installed and configured to use your Azure account.
- The Azure Functions Core Tools installed and configured.
- .NET 6.0+
- A MongoDB Atlas cluster deployed and configured with appropriate user and network access rules.

We'll be using the Azure CLI to configure Azure and we'll be using the Azure Functions Core Tools to create and publish serverless functions to Azure. Configuring MongoDB Atlas is out of the scope of this tutorial, so the assumption is that you've got a database available, a user that can access that database, and proper network access rules so Azure can reach your database. If you need help configuring these items, check out the MongoDB Atlas tutorial to set everything up.

## Create an Azure Function with MongoDB Support on Your Local Computer

We're going to start by creating an Azure Function locally on our computer. We'll be able to test that everything is working prior to uploading it to Azure.

Within a command prompt, execute the following command:

```bash
func init MongoExample
```

The above command will start the wizard for creating a new Azure Functions project. When prompted, choose **.NET** as the runtime since our focus will be C#. It shouldn’t matter whether or not you choose the isolated process model, but we won’t be using the isolated process for this example.

With your command prompt, navigate into the freshly created project and execute the following command:

```bash
func new --name GetMovies --template "HTTP trigger"
```

The above command will create a new "GetMovies" Function within the project using the "HTTP trigger" template, which is quite basic. In the "GetMovies" Function, we plan to retrieve one or more movies from our database. This tutorial references the MongoDB sample database **sample_mflix** and its sample collection **movies** throughout, but using them isn't a requirement; nothing we do can't be replicated using a custom database or collection.

At this point we can start writing some code! Since MongoDB will be one of the highlights of this tutorial, we need to install it as a dependency. Within the project, execute the following from the command prompt:

```bash
dotnet add package MongoDB.Driver
```

If you're using NuGet there are similar commands you can use, but for the sake of this example we'll stick with the .NET CLI.

Because we created a new Function, we should have a **GetMovies.cs** file at the root of the project.
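As a point of reference, the file generated by the "HTTP trigger" template typically contains boilerplate roughly like the sketch below. The exact contents depend on your Core Tools and template versions, so treat this as illustrative rather than exact.

```csharp
// Roughly what the "HTTP trigger" template generates for an in-process .NET Function.
// Exact content varies by tooling version; this is only an illustrative sketch.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace MongoExample
{
    public static class GetMovies
    {
        [FunctionName("GetMovies")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // The template simply echoes back a "name" value from the query string or request body.
            string name = req.Query["name"];
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            string responseMessage = string.IsNullOrEmpty(name)
                ? "This HTTP triggered function executed successfully."
                : $"Hello, {name}. This HTTP triggered function executed successfully.";

            return new OkObjectResult(responseMessage);
        }
    }
}
```

None of this boilerplate survives the next step, so don't worry about its details.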
Open it and replace the existing code with the following C# code:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using MongoDB.Driver;
using System.Collections.Generic;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Bson;
using System.Text.Json.Serialization;

namespace MongoExample
{
    [BsonIgnoreExtraElements]
    public class Movie
    {
        [BsonId]
        [BsonRepresentation(BsonType.ObjectId)]
        public string? Id { get; set; }

        [BsonElement("title")]
        [JsonPropertyName("title")]
        public string Title { get; set; } = null!;

        [BsonElement("plot")]
        [JsonPropertyName("plot")]
        public string Plot { get; set; } = null!;
    }

    public static class GetMovies
    {
        public static Lazy<MongoClient> lazyClient = new Lazy<MongoClient>(InitializeMongoClient);
        public static MongoClient client = lazyClient.Value;

        public static MongoClient InitializeMongoClient()
        {
            return new MongoClient(Environment.GetEnvironmentVariable("MONGODB_ATLAS_URI"));
        }

        [FunctionName("GetMovies")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            string limit = req.Query["limit"];

            IMongoCollection<Movie> moviesCollection = client.GetDatabase("sample_mflix").GetCollection<Movie>("movies");

            BsonDocument filter = new BsonDocument{
                {
                    "year", new BsonDocument{
                        { "$gt", 2005 },
                        { "$lt", 2010 }
                    }
                }
            };

            var moviesToFind = moviesCollection.Find(filter);

            if(limit != null && Int32.Parse(limit) > 0) {
                moviesToFind.Limit(Int32.Parse(limit));
            }

            List<Movie> movies = moviesToFind.ToList();

            return new OkObjectResult(movies);
        }
    }
}
```

There's a lot happening in the above code, but we're going to break it down so it makes sense.

Within the namespace, you'll notice we have a *Movie* class:

```csharp
[BsonIgnoreExtraElements]
public class Movie
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string? Id { get; set; }

    [BsonElement("title")]
    [JsonPropertyName("title")]
    public string Title { get; set; } = null!;

    [BsonElement("plot")]
    [JsonPropertyName("plot")]
    public string Plot { get; set; } = null!;
}
```

The above class is meant to map our local C# objects to fields within our documents. If you're using the **sample_mflix** database and **movies** collection, these are fields from that collection. The class doesn't represent all the fields, but because the *BsonIgnoreExtraElements* attribute is included, it doesn't matter. In this case, only the fields present in the class will be used.

Next you'll notice some initialization logic for our database:

```csharp
public static Lazy<MongoClient> lazyClient = new Lazy<MongoClient>(InitializeMongoClient);
public static MongoClient client = lazyClient.Value;

public static MongoClient InitializeMongoClient()
{
    return new MongoClient(Environment.GetEnvironmentVariable("MONGODB_ATLAS_URI"));
}
```

We're using the *Lazy* class for lazy initialization of our database connection. This is done outside the runnable function of our class because it is not efficient to establish connections on every execution of our Azure Function. Concurrent connections to MongoDB, and pretty much every other database out there, are finite, so if you have a large-scale Azure Function, things can go wrong quickly if you're establishing a connection on every invocation. Instead, we establish connections as needed.

Take note of the *MONGODB_ATLAS_URI* environment variable.
We'll obtain that value soon and we'll make sure it gets exported to Azure. This brings us to the actual logic of our Azure Function: ```csharp string limit = req.Query["limit"]; IMongoCollection moviesCollection = client.GetDatabase("sample_mflix").GetCollection("movies"); BsonDocument filter = new BsonDocument{ { "year", new BsonDocument{ { "$gt", 2005 }, { "$lt", 2010 } } } }; var moviesToFind = moviesCollection.Find(filter); if(limit != null && Int32.Parse(limit) > 0) { moviesToFind.Limit(Int32.Parse(limit)); } List movies = moviesToFind.ToList(); return new OkObjectResult(movies); ``` In the above code we are accepting a l*imit* variable from the client who executes the Function. It is not a requirement and doesn't need to be called *limit*, but it will make sense for us. After getting a reference to the database and collection we wish to use, we define the filter for the query we wish to run. In this example we are attempting to return only documents for movies that were released between the year 2005 and 2010. We then use that filter in the *Find* operation. Since we want to be able to limit our results, we check to see if *limit* exists and we make sure it has a value that we can work with. If it does, we use that value as our limit. Finally we convert our result set to a *List* and return it. Azure hands the rest for us! Want to test this Function locally before we deploy it? First make sure you have your Atlas URI string and set it as an environment variable on your local computer. This can be obtained through [the MongoDB Atlas Dashboard. The best place to add your environment variable for the project is within the **local.settings.json** file like so: ```json { "IsEncrypted": false, "Values": { // OTHER VALUES ... "MONGODB_ATLAS_URI": "mongodb+srv://:@.170lwj0.mongodb.net/?retryWrites=true&w=majority" }, "ConnectionStrings": {} } ``` The **local.settings.json** file doesn't get sent to Azure, but we'll handle that later. With the environment variable set, execute the following command: ```bash func start ``` If it ran successfully, you'll receive a URL to test with. Try adding a limit and see the results it returns. At this point we can prepare the project to be deployed to Azure. ## Configure a Function Project in the Cloud with the Azure CLI As mentioned previously in the tutorial, you should have the Azure CLI. We're going to use it to do various configurations within Azure. From a command prompt, execute the following: ```bash az group create --name --location ``` The above command will create a group. Make sure to give it a name that makes sense to you as well as a region. The name you choose for the group will be used for the next steps. With the group created, execute the following command to create a storage account: ```bash az storage account create --name --location --resource-group --sku Standard_LRS ``` When creating the storage account, use the same group as previous and provide new information such as a name for the storage as well as a region. The storage account will be used when we attempt to deploy the Function to the Azure cloud. The final thing we need to create is the Function within Azure. Execute the following: ```bash az functionapp create --resource-group --consumption-plan-location --runtime dotnet --functions-version 4 --name --storage-account ``` Use the regions, groups, and storage accounts from the previous commands when creating your function. In the above command we're defining the .NET runtime, one of many possible runtimes that Azure offers. 
Incidentally, if you want to see how to work with MongoDB using Node.js instead of .NET, check out this tutorial on the topic.

Most of the Azure cloud is now configured. We'll see the final configuration towards the end of this tutorial when it comes to our environment variable, but for now we're done. However, we still need to link the local project and the cloud project in preparation for deployment.

Navigate into your project with a command prompt and execute the following command:

```bash
func azure functionapp fetch-app-settings <function_name>
```

The above command will download settings information from Azure into your local project. Just make sure you've chosen the correct Function name from the previous steps.

We also need to download the storage information. From the command prompt, execute the following command:

```bash
func azure storage fetch-connection-string <storage_name>
```

After running the above command, you'll have the storage information you need from the Azure cloud.

## Deploy the Local .NET Project as a Function with Microsoft Azure

We have a project and that project is linked to Azure. Now we can focus on the final steps for deployment.

The first thing we need to do is handle our environment variable. We can do this through the CLI or the web interface, but for the sake of quickness, let's use the CLI.

From the command prompt, execute the following:

```bash
az functionapp config appsettings set --name <function_name> --resource-group <group_name> --settings MONGODB_ATLAS_URI=<atlas_uri>
```

The environment variable we're sending is the *MONGODB_ATLAS_URI* we saw earlier. Make sure you add the correct value, as well as the other related information, in the above command. You'd have to do this for every environment variable that you create, but luckily this project only has the one.

Finally, we can do the following:

```bash
func azure functionapp publish <function_name>
```

The above command will publish our Azure Function. When it's done, it will provide a link that you can use to access it.

Don't forget to obtain a "host key" from Azure before you try to access your Function from cURL, the web browser, or similar; otherwise, you'll likely receive an unauthorized error response.

```bash
curl https://<function_name>.azurewebsites.net/api/GetMovies?code=<host_key>
```

The above cURL is an example of what you can run; just swap the values to match your own.

## Conclusion

You just saw how to create an Azure Function that communicates with MongoDB Atlas using the .NET runtime. This tutorial explored several topics, including various CLI tools, efficient database connections, and the querying of MongoDB data. It could easily be extended to do more complex tasks within MongoDB, such as using aggregation pipelines as well as other basic CRUD operations.

If you're looking for something similar using the Node.js runtime, check out this other tutorial on the subject.

With MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud-based developer data platform in the market. Now, with the availability of Atlas on the Azure Marketplace, it’s never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the Atlas on Azure Marketplace listing.
md
{ "tags": [ "C#", ".NET", "Azure", "Serverless" ], "pageDescription": "Learn how to build scalable serverless functions on Azure that communicate with MongoDB Atlas using C# and .NET.", "contentType": "Tutorial" }
Getting Started with MongoDB Atlas and Azure Functions using .NET and C#
2024-05-20T17:32:23.501Z