[FR] Test statistics #5995
@martonmiklos some very interesting ideas here; they align somewhat with our proposed expansion of MES functionality. Some of the particulars here would be good as a plugin, but there is certainly room for expansion and improvement of the existing functionality. The stock item testing framework has received very little development over the last couple of years.
This could be added in the metadata for the test result itself, as there may be a wide range of data that different companies want to implement. However, if there is some generic data which we can agree would be generally useful, I am happy to add that in.
The existing implementation keeps all recorded test results, so you can already determine how many pass / fail results there are against a particular test case. Unless I am misreading this? In general, I am very keen to expand functionality here, and providing some measure of statistical analysis would be a great feature. If you have some specific implementation ideas that you want to suggest (in terms of fields, API access, etc.) then I'd suggest making a PR proposal and we can work from there?
Yes, but I would like to "cache" the retest flag to be able to query it effectively, i.e. if I want to generate retest statistics for, let's say, the past 2 months (10 parts, 3000 test results), then it is just a matter of a count in SQL. If it is not cached, we would need to go through each stock item and count the individual results per test template, etc.
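A minimal sketch of what that cached flag would enable, assuming a hypothetical `retest` boolean and a `date` timestamp on `StockItemTestResult` (field names are illustrative, not the current schema):

```python
from datetime import timedelta

from django.db.models import Count, Q
from django.utils import timezone

from stock.models import StockItemTestResult

# Retest ratio over the last two months, as a single aggregate query.
# `retest` is the hypothetical cached flag discussed above.
cutoff = timezone.now() - timedelta(days=60)

stats = StockItemTestResult.objects.filter(date__gte=cutoff).aggregate(
    total=Count('pk'),
    retests=Count('pk', filter=Q(retest=True)),
)
```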
Sounds like a plan! I missed one thing that was on my mind previously: the test status view for a build output. This would be a view where the test templates are listed for the part, and for each test template the tested/passed/failed/retested count is shown for a given build order. With this view we could easily get an overview of how the build outputs are progressing through the tests.
This issue seems stale. Please react to show this is still important.
Not stale
This issue seems stale. Please react to show this is still important.
I am still working on it
I started to implement new API endpoints for generating a pass/fail statistics table broken down by part, build and test template. As the test results belong under stock, I added the following API endpoints: stock/test-statistics/[by-part/by-build/by-test-template]. What do you think about this approach? On the other hand, I am struggling with the filtering: I would like the test statistics to be filterable by date, but the filtering UI does not become available in the REST UI. I am not even sure that the approach I am trying is the correct way to implement this. I appreciate any feedback!
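For context, a rough sketch of how such endpoints could be laid out under the stock API; the view here is a placeholder, not the code from the branch:

```python
from django.urls import include, path
from rest_framework.response import Response
from rest_framework.views import APIView


class TestStatisticsByPart(APIView):
    """Placeholder view: would return aggregated test statistics for one part."""

    def get(self, request, pk):
        # A real implementation would aggregate StockItemTestResult entries here
        return Response({'part': pk, 'results': []})


test_statistics_urls = [
    path('by-part/<int:pk>/', TestStatisticsByPart.as_view()),
    # 'by-build/<int:pk>/' and 'by-test-template/<int:pk>/' would follow the same pattern
]

urlpatterns = [
    path('stock/test-statistics/', include(test_statistics_urls)),
]
```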
@martonmiklos I have not had time to review this fully but a few points I'd like to make initially:
You need to use FilterSet - anything you add in the
Done. I changed to use the api/stock/test-statistics/by-part and api/stock/test-statistics/by-build API base URLs.
Yes, I tried to replicate that: I created a very basic FilterSet with two additional filter fields on the finished date, and attached it with filterset_class. However, it is not shown in the API UI or in the schema.
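For reference, a minimal sketch of the pattern being described here (a FilterSet with two date filters attached via `filterset_class`); the model, serializer and field names are illustrative rather than the exact code from the branch:

```python
from django_filters import rest_framework as rest_filters
from rest_framework import generics

from stock.models import StockItemTestResult
from stock.serializers import StockItemTestResultSerializer


class TestStatisticsFilter(rest_filters.FilterSet):
    """Two additional filters on the finish date of a test result."""

    class Meta:
        model = StockItemTestResult
        fields = []

    finished_after = rest_filters.DateFilter(field_name='date', lookup_expr='gte')
    finished_before = rest_filters.DateFilter(field_name='date', lookup_expr='lte')


class TestStatisticsList(generics.ListAPIView):
    """Note: DRF normally only renders the filter form for list views backed by a queryset."""

    queryset = StockItemTestResult.objects.all()
    serializer_class = StockItemTestResultSerializer
    filter_backends = [rest_filters.DjangoFilterBackend]
    filterset_class = TestStatisticsFilter
```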
@martonmiklos that looks correct to me - but it does not show up in the DRF browsable API?
@SchrodingersGat yes: I am under the impression that I am going against Django's queryset/filterset paradigm. I expose the queryset for the StockItemTestResult here. However, what I do "back in the model" after TestStatistic::get -> TestStatisticSerializer::to_representation -> StockItemTestResult::part/build_test_statistics is the following: I run queries by part/build to retrieve StockItemTestResults and then count them with a manually built query. If I think correctly, I should somehow expose these queries via the TestStatistic queryset and then only apply the passed/failed/part/build filters there. I think I could work around the whole issue by flipping everything upside down and moving most of the logic to the client side: query the test templates by part, then query the passed/failed results by part for each template.
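One way of expressing those manually built counts as a single annotated queryset, so that the part/build/date filters could be applied before aggregation; a sketch only, with assumed field names (`template`, `result`, `date`):

```python
from django.db.models import Count, Q

from stock.models import StockItemTestResult


def part_test_statistics(part, finished_after=None, finished_before=None):
    """Group one part's results by test template and count pass/fail in a single query."""
    results = StockItemTestResult.objects.filter(stock_item__part=part)

    if finished_after:
        results = results.filter(date__gte=finished_after)
    if finished_before:
        results = results.filter(date__lte=finished_before)

    return results.values('template').annotate(
        total=Count('pk'),
        passed=Count('pk', filter=Q(result=True)),
        failed=Count('pk', filter=Q(result=False)),
    )
```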
I'm at a loss - your implementation looks fine, not sure why it is not working as is :|
Well, I could access the query parameters and "smuggle" them down to the model method, but it would still be good to expose them in the API. Originally I wanted to create a "weekly test digest email" with an external tool, and for that it would be good if these filters appeared in the generated API schema.
Okay, I dug a bit deeper: the issue is caused by the "awkward" request handling. If I bypass the serializer, the filters appear. However, I do not know how this could be solved. Maybe @matmair has some recommendation.
On a quick look through, this seems to be correct; I will try to debug it before the release window closes.
It is not that urgent; I have not even opened a PR yet, as I wanted to include a few more related things:
@SchrodingersGat, @matmair What icon shall we use for the "Test statistics" menu?
@martonmiklos TBH I think the current icon is fine? Do we really need another one?
@matmair Having the same icon as the Test templates menu item above seemed somewhat strange to me, but I am fine with that.
I have no preference either way
@martonmiklos for the existing user interface you could use something like:
for the React interface, which uses tabler.io icons, perhaps:
@SchrodingersGat Cool, thanks, I will use the last two. One more technical question: I just discovered that the response fields do not get propagated to the API descriptor. Can you give some guidance on how to do that? I saw some @extend_schema magic like this in the code:
but I have not found an example for adding fields. Maybe I am thinking in the wrong direction?
You can do it like this: see InvenTree/src/backend/InvenTree/machine/api.py, lines 83 to 85 (commit 7e9d2f7).
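In the same spirit, a hedged sketch of how drf-spectacular's `@extend_schema` can attach an explicit response serializer so its fields show up in the generated schema; the serializer, view and example data here are hypothetical:

```python
from drf_spectacular.utils import extend_schema
from rest_framework import serializers
from rest_framework.response import Response
from rest_framework.views import APIView


class TestStatisticsSerializer(serializers.Serializer):
    """Hypothetical response serializer, used purely to document the fields."""

    template = serializers.CharField()
    passed = serializers.IntegerField()
    failed = serializers.IntegerField()
    total = serializers.IntegerField()


class TestStatisticsDetail(APIView):
    """Endpoint returning one statistics row per test template."""

    @extend_schema(responses={200: TestStatisticsSerializer(many=True)})
    def get(self, request, *args, **kwargs):
        # A real implementation would aggregate StockItemTestResult entries here
        data = [{'template': 'Firmware upload', 'passed': 10, 'failed': 2, 'total': 12}]
        return Response(TestStatisticsSerializer(data, many=True).data)
```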
Hi @matmair, @SchrodingersGat, may I ask for some React/Mantine datatable help? I have tried to alter the table's columns in the dataFormatter callback, but if I do that I get an error:
Can you please show us how you tried to modify the table columns? I can only guess that you tried to mutate the return value of the useMemo, but that should be treated as immutable; you would need to put the columns into a useState and use the second returned value (the setter function). And regarding your error, I think that is something on the mantine-datatable side, so don't worry. It only affects you if you loop over a list to render elements and do not provide a unique key prop for the component you render in the loop. Edit: you may also want to take a look at how the parametric part table is done, because it has dynamic columns.
@wolflu05 many thanks for your quick response, I really appreciate it! I forgot to include the link to my code; here it is. I have not used any useState/useMemo:
Have a look at https://github.com/inventree/InvenTree/blob/master/src/frontend/src/tables/part/ParametricPartTable.tsx And yes, the way you are doing it will not work; that is not how React works. But the linked example should pretty much cover what you are trying to do.
Yeah, I am trying to get into React/TypeScript "monkey style" and it is hard sometimes. 🤣 I will do some polishing and then I will PR it.
This issue seems stale. Please react to show this is still important.
Not stale
* fix style issue (inventree#7768)
* Pin OAS action (inventree#7767)
* [PUI] Bug fix for API forms (inventree#7758)
* [PUI] Bug fix for API forms
  - Ensure that "blank" data does not get overriden with previous fields
  - Better logic for form data fetching
* Playwright test fixes
* Add extra fields to BOM export (inventree#7775)
* Add extra fields to BOM export
* Bump API version
* Fix permissions for importing data (inventree#7776)
* Added test statistics (inventree#7164)
* Added test statistics — Fixed inventree#5995
* Bump API version
* Fix javascript varible scopes
* Fix javascript exports
* Remove duplicated import
* Add files modified by the pre-commit scripts
* Move test statistics API urls to a separate endpoint
* Merge test-statistics urls
* Undo unrelated changes
* Formatting fix
* Fix API urls for test statistics in PUI
* Fix prefixing in test statistic functions

---------

Co-authored-by: Oliver <[email protected]>

* Add more names
* split build and publish
* add attestation and SBOM
* format file
* Add toplevel permissions

---------

Co-authored-by: Oliver <[email protected]>
Co-authored-by: Miklós Márton <[email protected]>
Please verify that this feature request has NOT been suggested before.
Problem statement
It would be great if statistics could easily be generated from test results.
I am not only thinking about this from the part perspective as suggested in #821, but also from another, higher-level perspective (per test template, for example) or in general (see later).
What I am thinking about:
Test failure type analysis:
As I read the test reporting documentation I realized that we use the test result concept somewhat differently from how it was originally designed:
We run test sequences (composed of several test steps). If any of the test steps fails, the sequence terminates immediately and gets logged in the following manner: I store the failed step name in the value field. If all steps pass, a passed result is logged without any value.
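For illustration, roughly how that logging pattern looks when driven through the REST API from a test runner; the endpoint path and field names are assumptions and should be checked against the API schema of the running instance:

```python
import requests

API_URL = "https://inventree.example.com/api"      # placeholder instance
HEADERS = {"Authorization": "Token <api-token>"}   # placeholder credentials


def log_sequence_result(stock_item_id, test_name, passed, failed_step=None):
    """Record one test sequence run as a single test result."""
    payload = {
        "stock_item": stock_item_id,
        "test": test_name,
        "result": passed,
        # On failure, the name of the failing step is stored in the value field
        "value": "" if passed else failed_step,
    }
    response = requests.post(f"{API_URL}/stock/test/", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()
```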
I do not know whether this InvenTree usage approach is one that should be followed, but logging each step as a separate test result would not make much sense in our case (we have looped test steps with a few thousand iterations, for example).
Anyway: it would be good if the test result counts could be grouped by failure reason. Whether we use the value field for this calculation or introduce a dedicated failure reason field for the test run is up for discussion. The calculation could be filtered by test run date (time interval), part type or test template.
Retest statistics
Sometimes it is important to know how "smoothly" testing is going on the manufacturing floor. It would be great if the test results could include a retest field, which would be false for the first test result saved for a specific stock item and a given test template, and would be set to true if the test is executed again for the same stock item. From this retest flag we could derive statistics about the retest metrics, to see how stable testing is on the floor.
Test equipment statistics
It would be great if the test equipment information (test station number, for example) could be logged with each test result, and some metrics could be created to see how heavily the specific stations are utilized. (See the next point.)
Execution time statistics
There are tests which require operator intervention, and this affects the capacity of the testing process. It would be great if the test execution time (start/stop) could be stored for each test result, and some statistics (min, max, average, distribution of the test time) could be created per test template (with filtering by time interval or specific build order). It would also be useful to accumulate the test times grouped by test station, to see each station's utilization.
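A sketch of how such timing metrics could be aggregated, assuming hypothetical `started_datetime` / `finished_datetime` fields on each result and a `template` relation to group by:

```python
from django.db.models import (
    Avg, Count, DurationField, ExpressionWrapper, F, Max, Min,
)

from stock.models import StockItemTestResult

# Execution time per result, computed in the database
duration = ExpressionWrapper(
    F('finished_datetime') - F('started_datetime'),
    output_field=DurationField(),
)

timing_stats = (
    StockItemTestResult.objects
    .exclude(started_datetime=None)
    .exclude(finished_datetime=None)
    .annotate(duration=duration)
    .values('template')
    .annotate(
        runs=Count('pk'),
        min_time=Min('duration'),
        max_time=Max('duration'),
        avg_time=Avg('duration'),
    )
)
```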
Suggested solution
First I would like to discuss whether storing more data (test time, station info, failure step) makes sense to the other developers. I know that our "InvenTree test adaptation" went in a somewhat different direction from how InvenTree is used in other places, but I think these additions would make sense for many other users.
If we agree upon something in this area, I think this is something I can help move forward.
I know that the timing for new UI features is not the best at the moment (in the middle of the UI transition); however, for my own usage I do not need a web UI at first: I would like to create a daily/weekly test statistics digest e-mail, generated by an external tool using the InvenTree API (a rough sketch follows below).
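As a sketch of that external digest tool, assuming the proposed by-part statistics endpoint and its date filter exist (all names here are hypothetical):

```python
import datetime

import requests

API_URL = "https://inventree.example.com/api"      # placeholder instance
HEADERS = {"Authorization": "Token <api-token>"}   # placeholder credentials


def weekly_digest(part_id):
    """Fetch one week of statistics and format them as plain-text e-mail lines."""
    since = (datetime.date.today() - datetime.timedelta(days=7)).isoformat()
    response = requests.get(
        f"{API_URL}/stock/test-statistics/by-part/{part_id}/",
        params={"finished_after": since},  # hypothetical filter name
        headers=HEADERS,
    )
    response.raise_for_status()

    return "\n".join(
        f"{row['template']}: {row['passed']} passed / {row['failed']} failed"
        for row in response.json()
    )
```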
The API design/discussion/implementation is something I think I am capable of helping with.
Describe alternatives you've considered
Examples of other systems
I am not aware of any.
Do you want to develop this?