Glossary
Datura AI uses a wide range of technical and business terminology that can be hard to follow from a developer's perspective, including terms from the documentation standards used across the API and in routine business operations.
This document provides a comprehensive glossary of Datura AI terminology, with clear definitions to help users better understand the product, the API, and its functionality.
Term | Definition |
---|---|
Datura AI | An artificial intelligence platform providing advanced tools for data analysis and automation. |
Model Training | The process of teaching the AI model to recognize patterns from data, improving prediction accuracy. |
Data Pipeline | A series of data processing steps to clean, transform, and load data into AI models for analysis. |
Prediction API | An API that enables users to submit data to be processed by Datura AI models for predictive insights. |
Inference | The process where a trained model makes predictions based on new input data. |
Feature Engineering | The process of selecting, modifying, or creating new input features to improve model performance. |
Training Dataset | A set of data used to train an AI model, typically labeled with correct outcomes. |
Test Dataset | A dataset used to evaluate the performance of an AI model after it has been trained. |
API Endpoint | A specific function within Datura AI that allows developers to interact with AI models or data via HTTP requests. |
Authentication Token | A secure, time-limited key used to authenticate users or applications accessing the Datura AI API. |
Batch Processing | A method of processing large sets of data in chunks, typically used for model training or inference. |
Real-Time Processing | The processing of data as it is received, enabling instant predictions and responses. |
API Rate Limit | The maximum number of API requests allowed in a specific time period to prevent abuse or overuse of resources. |
Data Normalization | The process of scaling data to a standard range or distribution for better model performance. |
API Key | A unique identifier used to authenticate requests to the Datura AI API, required for access. |
Webhook | A user-defined HTTP callback triggered by specific events, such as a model’s prediction being completed. |
Model Deployment | The process of integrating a trained model into a live environment where it can provide real-time predictions. |
Monitoring | The practice of tracking the performance and accuracy of AI models during and after deployment. |
Error Handling | The process of managing and responding to errors or exceptions that occur during API interaction. |
Data Labeling | The process of assigning correct labels to raw data, enabling supervised learning for AI models. |
Hyperparameters | Configurable parameters used during model training that control the learning process, such as learning rate and batch size. |
Precision | A measure of the accuracy of the positive predictions made by the model. |
Recall | A measure of the ability of the model to detect all relevant positive instances. |
F1 Score | A balance between precision and recall, providing a single metric for model performance. |
API Documentation | A guide that details the structure, functionality, and usage of the Datura AI API for developers. |
Authentication Flow | The sequence of steps in the API for verifying and authorizing users to access specific resources. |
Bittensor | A decentralized protocol for training and deploying AI models, incentivizing users through a cryptocurrency system. |
Bittensor Subnet 22 | A specific network within the Bittensor protocol, designed to host and validate AI models. |
AI Search | A feature of Datura AI that leverages advanced algorithms to enable fast and efficient search capabilities across large datasets. |
AI Modeling Platforms | Platforms like Nova, Orbit, Horizon, X, and Web that integrate with Datura AI to create, deploy, and manage machine learning models. |
Nova | An AI modeling platform integrated with Datura AI, providing tools for model development and deployment. |
Orbit | Another AI modeling platform supported by Datura AI, designed for scalable machine learning workflows. |
Horizon | A next-gen platform for AI model management and analysis, integrated with Datura AI for enhanced functionality. |
X (Twitter) AI | A platform-based integration with Datura AI allowing for real-time analysis and predictions based on social media data. |
Web AI | A set of AI tools integrated into web-based applications, supported by Datura AI’s core infrastructure. |
Datura Validator | A high-performance node within the Datura AI ecosystem that contributes computing resources, data, and intelligence to the Bittensor network. |
Datura Validator Monetization | The process by which Datura Validator earns TAO cryptocurrency by providing open-source intelligence, computing power, and data to the Bittensor protocol. |
High-Performance Servers | Powerful computational infrastructure used by Datura Validator for rapid data processing and model execution. |
Security Protocols | Measures in place to protect the Datura Validator infrastructure and help maintain high uptime and reliability. |
Subnets Activity | A tool designed to analyze updates and activity within Bittensor subnets, providing insights on network health and performance. |
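The Authentication Token and API Key entries above describe credential-based access to the API. A minimal sketch of attaching a key to a request follows; the base URL and the bearer-token header format are illustrative assumptions, not the documented Datura AI scheme, so consult the official API documentation for the real values.

```python
import urllib.request

# Illustrative base URL -- replace with the real Datura AI API endpoint.
API_BASE = "https://api.example.com/v1"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    # Attach the API key as a bearer token on each request
    # (a common convention; the actual header may differ).
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("/predict", "my-api-key")
print(req.get_header("Authorization"))  # Bearer my-api-key
```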
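The API Rate Limit and Error Handling entries go hand in hand: a client that hits the rate limit should back off and retry rather than fail immediately. The sketch below shows a generic exponential-backoff pattern; the `RateLimitError` class is a stand-in for whatever error the API actually raises on HTTP 429.

```python
import time

class RateLimitError(Exception):
    """Stand-in for the error raised when the API returns HTTP 429."""

def call_with_backoff(request_fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    # Call request_fn, retrying on RateLimitError with exponentially
    # growing delays (1s, 2s, 4s, ...); re-raise once retries run out.
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the helper testable without real delays.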
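The Data Normalization entry refers to scaling inputs into a standard range. Min-max scaling, sketched below, is one common approach; it is not necessarily the method Datura AI uses internally.

```python
def min_max_normalize(values: list[float]) -> list[float]:
    # Scale values linearly into the [0, 1] range.
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant input: map everything to 0.0 to avoid division by zero.
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([2, 4, 6, 10]))  # [0.0, 0.25, 0.5, 1.0]
```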
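The Webhook entry describes user-defined HTTP callbacks fired on events such as a completed prediction. Webhook receivers commonly verify an HMAC signature on each delivery; the sketch below shows that general pattern, assuming an HMAC-SHA256 hex signature over the raw payload (the actual Datura AI signing scheme is not documented here).

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature: str, secret: str) -> bool:
    # Recompute the HMAC-SHA256 digest of the payload and compare it
    # to the sender-supplied signature in constant time.
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.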
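The Precision, Recall, and F1 Score entries can be made concrete with a short computation using their standard definitions in terms of true positives (tp), false positives (fp), and false negatives (fn).

```python
def precision(tp: int, fp: int) -> float:
    # Fraction of positive predictions that were correct.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Fraction of actual positives the model detected.
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Example: 6 true positives, 2 false positives, 2 false negatives.
p = precision(6, 2)    # 0.75
r = recall(6, 2)       # 0.75
print(f1_score(p, r))  # 0.75
```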