Clarification on the Differences Between "Schedules" and "Web API Endpoints" / How to use PostgreSQL
I have two inquiries.
1. I would like to clarify the exact differences between "Schedules (Run a process)" and "Web API Endpoints" on AI Hub.
From my understanding, both are ultimately used to execute models created in AI Studio.
For visualization purposes, platforms like Grafana or Panopticon utilize Web API Endpoints, which makes sense, as this is how APIs are typically consumed.
Given this, I am curious to know the specific need for "Schedules." Is it correct to assume that their sole purpose is to run processes that collect and store data?
Additionally, I would appreciate it if you could share how others typically use this feature.
2. It seems that PostgreSQL is automatically installed during the AI Hub installation process.
However, I am unsure how to use PostgreSQL within Docker.
Could you please provide guidance on how to configure it in this context, so that it can be used effectively by AI Hub and AI Studio?
P.S. I am using AI Hub 2024.3.
Best Answers
MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
Hi,
Endpoints are REST webservices. They are used for small process executions which return their result within a few seconds at most; response times can go down to a few dozen milliseconds. Endpoints get triggered from the outside. The typical use case for this is scoring with a model.
Schedules are executed periodically. They run in the normal Job Agent architecture and are thus suitable for large jobs which need a lot of resources and may take a long time. They are not triggered from the outside.
Typical jobs are model retraining or large data preparation jobs.
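For illustration, here is a minimal sketch of what calling such a scoring endpoint from the outside might look like. The URL, token, and payload fields are hypothetical placeholders, not values from this thread; use the endpoint URL and authentication method configured in your own AI Hub deployment.

```python
# Minimal sketch of calling an AI Hub Web API endpoint for model scoring.
# ENDPOINT_URL, API_TOKEN, and the payload fields are hypothetical placeholders.
import requests

ENDPOINT_URL = "https://your-aihub.example.com/endpoints/score-defects"  # hypothetical
API_TOKEN = "your-api-token"  # hypothetical; use the auth configured in your deployment

# Example input row for the deployed scoring process (field names are made up).
payload = {"data": [{"temperature": 71.3, "pressure": 1.02}]}

response = requests.post(
    ENDPOINT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,  # endpoints are meant to answer within seconds
)
response.raise_for_status()
print(response.json())  # the model's prediction for the input row
```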
On the Postgres question: I think the PostgreSQL instance in the Docker deployment is only used for AI Hub internal things, and you shouldn't worry about it or use it directly.
Cheers,
Martin
- Sr. Director Data Solutions, Altair RapidMiner -
Dortmund, Germany
MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
Hi,
yes, Web API endpoints, or even a stand-alone RTSA, would be the way to go. The scheduler is only for heavy-duty tasks which take a few minutes or hours.
And yes, Grafana and Panopticon use the Web API endpoints as well.
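As a rough illustration of the dashboard use case asked about below: a dashboard refresh amounts to periodically calling the scoring endpoint, along the lines of the sketch here. Grafana itself would typically do this through a JSON data-source plugin rather than custom code; the endpoint URL and token are the same hypothetical placeholders as above.

```python
# Rough sketch: poll a scoring endpoint periodically, as a dashboard refresh would.
# ENDPOINT_URL and API_TOKEN are hypothetical placeholders for your own deployment.
import time
import requests

ENDPOINT_URL = "https://your-aihub.example.com/endpoints/score-defects"  # hypothetical
API_TOKEN = "your-api-token"  # hypothetical

while True:
    # In a real factory setup this reading would come from your sensor feed.
    latest_reading = {"data": [{"temperature": 70.9, "pressure": 0.98}]}
    resp = requests.post(
        ENDPOINT_URL,
        json=latest_reading,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # feed this into your dashboard or time-series store
    time.sleep(30)  # matches a typical dashboard refresh interval
```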
Cheers,
Martin
- Sr. Director Data Solutions, Altair RapidMiner -
Dortmund, Germany
Answers
Hello, @MartinLiebig
Thank you for helping me better understand the difference between endpoints and schedules. I truly appreciate it.
For example, if I wanted to use a defect prediction model in a smart factory to perform real-time predictions through Grafana, this would involve making API calls to AI Hub.
In such a case, would it be better to use an endpoint, or should I consider using a scheduler?
Additionally, if I were to use Grafana or Panopticon, both of which can utilize endpoints, would accessing them via an endpoint be the method Altair recommends?
Best,
Seulbi