The microsevenX v1.0.1 sz prototype is complete

There are two phases to bringing the microsevenX (computer vision and ML) software to its official release version.

Phase one – generate an ML (machine learning) model

1. Collect sample images of the object and give each object an annotation.

2. Run the machine learning training (YOLOv8), and it will generate the model. This requires a high-end computer with an Nvidia CUDA-compatible GPU. We collected only 1,200 samples from the Rutgers data.

3. The trained ML model will be used by the software (temporary name: microsevenX).
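The YOLOv8 training run in step 2 reads a small dataset config describing where the annotated samples live. A sketch of what that file might look like (the paths and class name here are assumptions, not the project's actual settings):

```yaml
# dataset.yaml — illustrative only
path: datasets/microsevenx   # root folder with the ~1,200 annotated samples
train: images/train          # training split
val: images/val              # validation split
names:
  0: sz_event                # placeholder class name
```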
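The annotation from step 1 typically ends up in YOLO's plain-text label format: one line per object, with a class id and a box normalised by the image size. A minimal sketch of that conversion, using made-up pixel values:

```python
def to_yolo_label(cls_id, box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) into a YOLO
    annotation line: "class x_center y_center width height", with all
    coordinates normalised to [0, 1] by the image dimensions."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a 200x200 px box in a 640x480 image.
line = to_yolo_label(0, (100, 50, 300, 250), 640, 480)
print(line)  # → 0 0.312500 0.312500 0.312500 0.416667
```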

Phase two – develop the software (microsevenX), an automation tool for computer vision applications

1. Hardware: a Microseven camera and a working computer.

2. Required development tools: an SQL database, Python, and a C compiler.

3. GUI design and features requested by users:

a. Functionality: scan a video clip or an RTSP (Real Time Streaming Protocol) stream, and save the scanned data in the SQL database.

b. Find the peak points on the (x, y) plot. Hover the mouse over the curve and click on a point, and the corresponding playback image and video are displayed within seconds.

c. Data saved at the coordinates will be kept permanently for the research.

d. After a clip or RTSP video is scanned, users can receive an instant message about the sz point by phone or email.

e. The software will be compatible with Windows or macOS on a working computer.
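The database side of feature (a) can be sketched with Python's built-in sqlite3 module. The table layout and the stubbed detection values below are assumptions; in the real tool the frames would come from OpenCV (e.g. cv2.VideoCapture on the RTSP URL) and be run through the trained model before anything is saved:

```python
import sqlite3

# Assumed schema for the scanned data; column names are illustrative.
SCHEMA = """
CREATE TABLE IF NOT EXISTS detections (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    source    TEXT NOT NULL,   -- clip path or RTSP URL
    t_seconds REAL NOT NULL,   -- position in the video
    score     REAL NOT NULL,   -- model confidence
    x         REAL NOT NULL,   -- detection coordinate
    y         REAL NOT NULL
)
"""

def save_detection(conn, source, t_seconds, score, x, y):
    """Insert one detection row produced while scanning a video."""
    conn.execute(
        "INSERT INTO detections (source, t_seconds, score, x, y) "
        "VALUES (?, ?, ?, ?, ?)",
        (source, t_seconds, score, x, y),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # a file path in the real application
conn.execute(SCHEMA)
save_detection(conn, "rtsp://example-camera/stream", 12.5, 0.91, 320.0, 240.0)
rows = conn.execute("SELECT source, score FROM detections").fetchall()
print(rows)  # → [('rtsp://example-camera/stream', 0.91)]
```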
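The peak points in feature (b) are local maxima of the per-frame score series. A minimal sketch of that detection step, with made-up scores (the click-to-playback part of the GUI would sit on top of a plotting library's pick events, which is not shown here):

```python
def find_peaks(scores):
    """Return the indices where a value is strictly greater than both
    of its neighbours, i.e. the local maxima of the score curve."""
    return [
        i for i in range(1, len(scores) - 1)
        if scores[i - 1] < scores[i] > scores[i + 1]
    ]

# Made-up per-frame confidence scores.
scores = [0.1, 0.4, 0.2, 0.3, 0.9, 0.5, 0.5]
peaks = find_peaks(scores)
print(peaks)  # → [1, 4]
```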
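For feature (d), the email half of the notification can be built with Python's standard-library email and smtplib modules. The addresses, subject wording, and SMTP host below are placeholders, not the project's actual settings:

```python
import smtplib
from email.message import EmailMessage

def build_sz_alert(recipient, source, t_seconds):
    """Build the instant-notification email for a detected sz point."""
    msg = EmailMessage()
    msg["From"] = "microsevenx-alerts@example.com"  # placeholder sender
    msg["To"] = recipient
    msg["Subject"] = "microsevenX: sz point detected"
    msg.set_content(
        f"An sz point was detected in {source} at {t_seconds:.1f} s."
    )
    return msg

msg = build_sz_alert("user@example.com", "rtsp://example-camera/stream", 12.5)
print(msg["Subject"])

# Actually sending requires an SMTP server (the host is an assumption):
# with smtplib.SMTP("smtp.example.com") as server:
#     server.send_message(msg)
```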
