https://edgertronic.mywikis.wiki/w/api.php?action=feedcontributions&user=Tfischer&feedformat=atomedgertronic high speed video camera - User contributions [en]2024-03-28T12:31:21ZUser contributionsMediaWiki 1.35.13https://edgertronic.mywikis.wiki/w/index.php?title=Edgertronic_camera_software_recovery&diff=5711Edgertronic camera software recovery2024-03-22T22:01:45Z<p>Tfischer: v2.5.3rc31</p>
<hr />
<div>A bricked camera is when the software on the '''micro SD card''' that runs the camera is no longer usable. The micro SD card is the small one that you normally don't remove in the recessed slot between the big SD card and the LEDs. If you lost power during a software update, you may have bricked your camera. If you think your camera is bricked for some other reason, please send an email to '''info@edgertronic.com''' with the details of what happened right before the camera stopped functioning correctly.<br />
<br />
= Get latest camera software =<br />
<br />
Download the microSD card image file:<br />
<br />
* <span style="color:purple">'''[https://www.edgertronic.com/releases/v2.5.3rc31/sdcard_image/sdcard.v2.5.3rc31.img.zip v2.5.3rc31 SD card image]'''</span> <br />
<br />
Unzip the downloaded file to get the microSD card image file.<br />
<br />
= Removing the micro SD card =<br />
<br />
The micro SD card is in the slot next to the big SD card. The micro SD card is recessed in a spring-loaded slot. Remove the micro SD card by gently pushing it farther into the camera (just as you do with the big SD card); the micro SD card will then pop out. You may find a paperclip is the right size to let you press the micro SD card farther into the camera.<br />
<br />
Be sure to press the micro SD card straight in (perpendicular to the camera back); otherwise the card may hang up on the edge of the slot.<br />
<br />
'''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' <br />
<br />
= Unbrick using Windows =<br />
<br />
== Set Up ==<br />
<br />
Before you can write to the micro SD card from a Windows machine, you need to download a Windows program that can copy the contents of the image file directly over the entire micro SD card. There are several such programs to choose from.<br />
<br />
== Disk Imaging ==<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (pc)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards. I recently realized that BalenaEtcher runs on Windows as well. I tried it out using the following steps and was successful on a Windows 10 computer.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for Windows (x86|x64)''. I tested v1.5.116.<br />
# Install a micro SD card into your PC. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Windows PC.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Mac OS =<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (mac)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for MAC''. I tested v1.5.86.<br />
# Install a micro SD card into your Mac. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Mac.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Ubuntu =<br />
<br />
* Plug your micro SD card into the Ubuntu computer using the appropriate adaptor. Find the dev name for your micro SD card using command:<br />
<br />
<pre style="background:#d6e4f1"><br />
df<br />
</pre><br />
<br />
* Unmount the file system on the micro SD card, replacing ''N'' with the partition number taken from the above command output. For example, if the dev name is /dev/sdb1, then N is 1 in the command below:<br />
<br />
<pre style="background:#d6e4f1"><br />
umount /dev/sdbN<br />
</pre><br />
<br />
* Use the '''dd''' command to completely overwrite the contents of the microSD card. In the example below, the downloaded disk image is sdcard.20151216204804.img; change the path as appropriate if you downloaded the image to a different location. Make sure to use the correct dev name. '''If the dev name is found to be /dev/sdb1, then use /dev/sdb in the 'dd' command (you need to omit the partition number 'N').'''<br />
<pre style="background:#d6e4f1"><br />
FILE=sdcard.20151216204804.img<br />
sudo dd bs=64M if=~/Downloads/$FILE of=/dev/sdb<br />
</pre><br />
<br />
* After the 'dd' command finishes, run the 'sync' command and then unplug the microSD card from the Ubuntu system.<br />
<pre style="background:#d6e4f1"><br />
sync<br />
</pre><br />
<br />
= Reinstalling micro SD card into camera =<br />
<br />
[[Image:Inserting-micro-sd-card.jpg|300px|thumb|right]]<br />
<br />
Once you have an imaged micro SD card, insert it back into the camera. '''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' Insert the micro SD card with the camera powered off. You can use a paperclip to gently push the micro SD card into the slot. Give the camera about a minute; the LEDs should come back on and the camera should update itself. If the image you used was an older software version, you will need to perform a software update manually after the camera finishes the re-image process.<br />
<br />
Simply copy the newest software update file (or the desired software version's update file) directly onto the SD card (the big one), power on the camera, and wait through the [[LEDs|LED]] “white pattern” as the camera updates.<br />
<br />
If the camera still does not work, try a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
= Trying out a beta release =<br />
<br />
We are a rather open company. We use Open Source software. As much as practical, we make the camera's source code available. We work hard supporting CAMAPI so you can integrate the camera into your existing processes. We even make our buggy beta releases available for you to try out. We ask just one simple thing in return: please, please, please keep the fully tested micro SD card that came with the camera intact. Buy another quality U10 class micro SD card to use when running the beta release software. That way, if the beta release causes more problems than it solves, you can simply swap in the micro SD card that came with the camera and you are back in business.<br />
<br />
To see what beta release is available, browse to the [http://www.edgertronic.com/releases/ releases directory].<br />
<br />
Since you are going to be programming that brand new microSD card, first [[SDK_-_Developer_tricks#Extracting_sdcard.img_file_from_update_tarball|extract the SD card image]] from the beta release update tarball and then program the shiny new microSD card with the beta version of the software, as described above. You may be able to use the extracted image from the sdcard_image directory if it exists for the release you want to test.<br />
<br />
If you are brave enough to try out the beta release, you likely have good suggestions on what we can be doing better. Please share those suggestions with us at '''info@sanstreak.com''' .<br />
<br />
[[Category:Troubleshooting]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Template:Software_release_version_2.5.3&diff=5710Template:Software release version 2.5.32024-03-22T22:00:34Z<p>Tfischer: Version 2.5.3rc31</p>
<hr />
<div>= Version 2.5.3 =<br />
<br />
* Better user experience when using a tablet to control the camera.<br />
* Improved accuracy of the trigger time value. A first set of measurements showed an accuracy of ±10 ms or better when using a stratum 1 NTP server.<br />
* Manual save mode for easier integration into a larger data acquisition system.<br />
* Detects and reports via webUI if an [[Storage#Camera_does_not_use_my_new_storage_device|unsupported exFAT]] file system formatted storage device is installed.<br />
<br />
== Update file ==<br />
<br />
[[Updating edgertronic software|Update your edgertronic camera]] by copying the [https://www.edgertronic.com/releases/v2.5.3rc31/sanstreak_update.ssc.20240304145102.95.6dd23f0a.4d699d.v2_5_3rc31.tar v2.5.3rc31 update tarball file] to the big SD card, powering on the camera, and waiting around 7 minutes for the camera to finish updating.<br />
<br />
<span style="color:red"><br />
'''If you are updating from 2.4.1g6 or earlier, you will need to perform a [[User_Manual_-_Factory_reset|factory reset]] after the update.'''<br />
</span><br />
<br />
== Version details ==<br />
<br />
<pre><br />
Build host: linux-vm<br />
Built by: tfischer<br />
Build date: 20240304145102<br />
Build tag: ssc1<br />
Build hash: 6dd23f0a<br />
Build version: v2.5.3rc31<br />
</pre><br />
<br />
== CAMAPI documentation ==<br />
<br />
Python class exposing edgertronic CAMAPI via HTTP: [http://www.edgertronic.com/releases/camapi/camapi.2.5.3rc27.html v2.5.3rc27 documentation]<br />
<br />
== Release Notes ==<br />
<br />
Improvements since [[software release version 2.5.2]]:<br />
<br />
* Better settings layout on iPad to get rid of the extra whitespace. This resulted in a minor reordering of a couple of settings in the Options tab.<br />
* Changed reported trigger time from an integer to a floating point number. Tested the accuracy of the reported trigger using a stratum 1 NTP server connected on the local network. Improved how trigger time is captured to compensate for the non-real-time nature of the Linux operating system.<br />
* Improved memory BIST reporting.<br />
* Manual save mode for easier integration into a larger data acquisition system.<br />
* webUI detects and reports if an unsupported exFAT file system formatted storage device is installed.<br />
* The webUI help system now points to internet edgertronic wiki instead of cached wiki in camera.<br />
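<br />
Since the reported trigger time is now a float (seconds since the epoch), sub-second precision survives conversion to a readable timestamp. A small sketch using a hypothetical trigger time value:<br />
<br />
```python
from datetime import datetime, timezone

trigger_time = 1709567462.250  # hypothetical float value reported by the camera
dt = datetime.fromtimestamp(trigger_time, tz=timezone.utc)
print(dt.isoformat(timespec="milliseconds"))  # keeps the fractional seconds
```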
<br />
== Resolved defects ==<br />
<br />
=== 202309051411 Passing a parameterized filename via multicast trigger truncates filename at the ampersand character ===<br />
<br />
If you include a filename like ''behind_home_plate_inning_1_pitch_27_&T'' in the multicast trigger UDP packet, the actual file saved will be ''behind_home_plate_inning_1_pitch_27_.mov''.<br />
<br />
=== 20210611094523 Factory reset may be required after camera software update ===<br />
<br />
After updating to v2.5.1 or newer from a version v2.4.1 or older, you may need to do a<br />
[[User_Manual_-_Factory_reset|factory reset]] after the update. The camera remembers all your customized camera settings during the update process. Somehow one of those settings is not being accepted by v2.5.1 or newer. Note that a factory reset will erase any custom interfaces file with a fixed IP address other than 10.11.12.13, which may cause you to temporarily lose your connection to the camera until you figure out which address is being used by the camera.<br />
<br />
This is due to unmodified default configuration files being saved, which keeps newer software releases from using their updated default configuration files. The defect was fixed by deleting unmodified configuration files before the new software runs, forcing the new software to install its default version of the configuration file. The issue caused problems due to an update to the lighttpd web server configuration file to work around an iPad security change.<br />
<br />
This defect was resolved in release 2.5.2rc33, which wasn't widely distributed.<br />
<br />
=== 20220909151902 Camera doesn't properly handle the network gateway setting ===<br />
<br />
This defect goes to show that if you don't test properly, it likely won't work. Hope is not a successful development strategy. I didn't have a complex enough network setup to test the gateway functionality. Now I do. This defect has been resolved. <br />
<br />
=== 202301083425 Camera wasn't properly reporting temperatures below freezing ===<br />
<br />
When the camera internal temperature dropped below 0 Celsius, the calculation of the negative value wasn't done properly. This defect has been resolved.<br />
<br />
=== 20230214082349 CAMAPI cancel() doesn't propagate changes made using reconfigure_run() ===<br />
<br />
When in review-before-save mode, if you change the camera's settings via the webUI (or reconfigure_run()) those changes do not properly propagate to the future captures when you invoke the CAMAPI cancel() method. This defect has been resolved.<br />
<br />
=== 20230227173428 WebUI overclock setting not being handled properly on an SC2 camera ===<br />
<br />
There was a javascript error when running with an SC2 8GB camera. This defect was introduced in a beta release, so most customers were not affected by the issue. This defect has been resolved.<br />
<br />
== SDK API changes ==<br />
<br />
* New [[Captured_video_queue_control|manual save]] mode.<br />
* Add support for CAMAPI [https://www.edgertronic.com/releases/camapi/camapi.html#HCamapi-delete_captured_videos delete_captured_videos()] method.<br />
<br />
== Developer changes ==<br />
<br />
* New [[Captured_video_queue_control|manual save]] mode.<br />
* CAMAPI [https://www.edgertronic.com/releases/camapi/camapi.html#HCamapi-get_captured_video_info get_captured_video_info()] trigger time is a floating point number instead of an integer and includes the anticipated filename that will be used unless overridden (e.g. via the CAMAPI [https://www.edgertronic.com/releases/camapi/camapi.html#HCamapi-selective_save selective_save()] parameter). If the user parameter was set, then get_captured_video_info() returns the user parameter as well.<br />
* Metadata file reports trigger time as a floating point number instead of an integer.<br />
* Better logging of all allowed values when CAMAPI run() is invoked.<br />
* The update tarball is moved to the SD card installed directory instead of being deleted.<br />
* Potential backwards compatibility issue for software reading metadata files. The trigger time is now a float instead of an integer.<br />
<br />
[[Category:Releases]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Software_releases&diff=5709Software releases2024-03-21T22:59:52Z<p>Tfischer: /* Anticipated features */</p>
<hr />
<div>{{Software release version 2.5.3}}<br />
<br />
{{Known defects}}<br />
<br />
= Software release images =<br />
<br />
{{Software release images}}<br />
<br />
= Older releases =<br />
<br />
All cameras can safely downgrade to version 2.5.2, 2.5.1, 2.4.1, 2.3.1 or 2.2.1. Software versions older than 2.2.1 have been removed due to hardware changes that make older versions incompatible with some cameras.<br />
* [[Software release version 2.5.2]]<br />
** Released: April 14, 2022<br />
* [[Software release version 2.5.1]]<br />
** Released: June 8th, 2021<br />
* [[Software release version 2.4.1]]<br />
** Released: March 24th, 2020<br />
* [[Software release version 2.3.1]]<br />
** Released: Jan 4th, 2019<br />
* [[Software release version 2.2.2]]<br />
** Released: Oct 22nd, 2017<br />
* [[Software release version 2.2.1]]<br />
** Released: Feb 24th, 2017<br />
* [[Software release version 2.1]]<br />
** Released: April 4th, 2015<br />
* [[Software release version 2.0]]<br />
** Released: Sept 11th, 2014<br />
* [[Software release version 1.3]]<br />
** Released: July 30th, 2014<br />
* [[Software release version 1.2]]<br />
** Released: April 5th, 2014<br />
* [[Software release version 1.1]]<br />
** Released: Jan 10th, 2014<br />
* [[Software release version 1.0]]<br />
** Released: Dec 7th, 2013<br />
<br />
After downgrading the camera, a [[User Manual - Factory reset|factory reset]] is recommended.<br />
<br />
= Software release repository =<br />
<br />
You can find all software releases at:<br />
<br />
* https://www.edgertronic.com/releases<br />
<br />
In addition to the releases, there is a software release schema JSON file and a JSON file containing all the information about the available software releases.<br />
<br />
== Schema for describing available releases ==<br />
<br />
The edgertronic-available-updates-schema.json file describes the format of the edgertronic-available-updates.json file. It is used with the [https://pypi.org/project/jsonschema/ <tt>jsonschema</tt>] python library to validate the available updates file.<br />
<br />
== Available updates ==<br />
<br />
A ''release'' is a tested edgertronic camera software update. We make other software updates available, such as beta releases, which you should not use. Occasionally I go and delete all the old updates that are not tested releases. I will also delete a previous release if it is causing too many customer support calls. It should always be safe to ''back rev'' - use an older release than the version currently running on the camera.<br />
<br />
The edgertronic-available-updates.json file contains the following information:<br />
<br />
* <tt>last_update_epoc</tt>: an integer indicating when the file was last updated. The value is in epoch format to allow easy comparison of which of two files is more current.<br />
* <tt>current_release</tt>: string indicating the release version name for the recommended release you should be running on all your edgertronic cameras.<br />
* <tt>releases</tt>: dictionary (object in JSON terminology) containing a dictionary for each release.<br />
<br />
Each release is a JSON object whose name is the release name and whose properties include:<br />
* <tt>state</tt>: string that is one of ''released'', ''beta'', ''internal_use_only'', or ''broken''.<br />
* <tt>release_date</tt>: Human readable date string in the format ''MMM DD, YYYY''.<br />
* <tt>size</tt>: integer size in bytes<br />
* <tt>md5sum</tt>: hex string encoded value starting with ''0x''.<br />
* <tt>description</tt>: Human readable string<br />
* <tt>release_notes_url</tt>: URI pointing to the human readable release notes.<br />
* <tt>download_url</tt>: URI pointing to the update tarball file.<br />
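<br />
As an illustration, the recommended release can be looked up from the parsed file like this. The field names follow the list above; the exact URL of the JSON file is an assumption (it is presumed to live in the documented releases directory):<br />
<br />
```python
import json
import urllib.request

# Assumed location: the releases directory documented above
URL = "https://www.edgertronic.com/releases/edgertronic-available-updates.json"

with urllib.request.urlopen(URL) as resp:
    updates = json.load(resp)

# current_release names the recommended release; releases maps name -> details
current = updates["current_release"]
info = updates["releases"][current]
print(current, info["state"], info["release_date"])
print("download:", info["download_url"])
```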
<br />
= Anticipated features =<br />
<br />
Let us know what features you would like to see added to the edgertronic high speed camera. Here are some requests we have received:<br />
<br />
* Pretrigger percentage greater than 100%. This allows devices, like radar, that trigger the camera after the action has occurred to avoid saving post-action frames.<br />
* Over the network software update - allow upload of an update file from the user's computer<br />
* Support [https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_system Design rules for Camera File system]<br />
* Usability - when using CIFS, don't check for free space so often<br />
* Update Open Source packages<br />
* Performance tuning / SPI overhead investigation<br />
* White balance, focus aid and exposure histogram<br />
* Camera auto discovery on the network<br />
* User name and password to control who can browse to the camera<br />
* Add support for <video> tag in HTML served up by the camera<br />
* Add support for the camera telling the webUI when a status change occurs instead of using 1 second polling<br />
* User supplied gamma correction table.<br />
* Image based triggering<br />
* When one camera on the local network is triggered, have it use [[SDK_-_Multicamera_network_trigger#Camera_initiated_multi-camera_trigger|multicast network trigger]] to trigger the rest of the cameras on the same local network. Available now using [[Extending edgertronic capabilities - Extending edgertronic camera functionality|User Added URLs]] facility.<br />
* Support rotating the camera image 90 degrees so the camera can take highest frame rate capture of vertical images.<br />
* Node.js binding for CAMAPI (python and .NET binding already available)<br />
* Enhance multi-rate capture where the physical trigger button can be used to switch from post1 capture rate to post2 capture rate.<br />
<br />
= Requested features =<br />
<br />
* Allow background triggers while in review before save. Allow captures to be discarded after saved in review before save.<br />
* Having the camera POST its status (namely when a file is done encoding) to a user specified HTTP address. This may be possible using the [[Extending edgertronic capabilities - Extending edgertronic camera functionality|User Added URLs]] facility.<br />
* Sound based triggering - This will not be supported.<br />
* Halogen color temp setting - Because of the decreasing use of halogen lighting, this feature becomes less useful over time.<br />
* Multi-trigger in multi-shot capture - Trigger capturing the next video while the current video is being captured, with no frames being dropped.<br />
* Extend CAMAPI <tt>review_frame()</tt> method to return the actual frame image instead of the status<br />
* User added URL that copies videos and metadata from SD card to a network computer using FTP.<br />
* Allow genlock signaling to just trigger a camera. Imagine an environment with three genlocked cameras focused on the local event activity and a fourth camera taking an overview video that also covers before and after activity. It would be nice to simply use the trigger signal from the genlock source camera.<br />
* Enable audio recording<br />
* UDP discovery <br />
* Trackman ready physical extension for the camera<br />
* A submillisecond time integration with tracking devices<br />
* Background save for selective save<br />
* URL factory_reset3 which performs a factory reset on everything but the /etc/network/interfaces network settings.<br />
* Trigger delay compensation - camera setting to compensate for the delay between when an event occurs and when the system generating the trigger fires. For example, a human's response time in pressing a trigger button after seeing lightning is around 400 ms. The implementation would be to increase the pre-trigger time by the trigger delay compensation, then when saving discard all the frames from the point in the capture that is trigger minus the trigger delay compensation value.<br />
* Enhance CAMAPI <tt>selective_save()</tt> method to allow specifying a frame dropping pattern as the video is being saved. The frames to be dropped would be specified via a list of (starting_frame_number, save_ratio, ratio_slope) tuples. Imagine a captured video of a baseball pitch at 700 fps. Assume the capture duration is 3 seconds, with 1.4 seconds being the wind up, a 0.7 second pitch, and 0.9 seconds of follow through, with the playback frame rate set to 30 fps. Then the wind up could be saved at half playback speed using 60 fps (meaning of the 980 frames captured in the first 1.4 seconds at 700 fps, only 84 are saved, keeping roughly one of every 11.7 frames), make a transition from 60 fps to 700 fps, save the pitch at 700 fps, then again transition back from 700 fps to 60 fps.<br />
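<br />
The arithmetic in the wind-up example above can be checked with a few lines (the numbers are taken from the request; integer milliseconds avoid float rounding surprises):<br />
<br />
```python
CAPTURE_FPS = 700
windup_ms = 1400                  # 1.4 s wind-up
slowmo_fps = 60                   # effective save rate for the wind-up

captured = windup_ms * CAPTURE_FPS // 1000  # frames captured in the wind-up
saved = windup_ms * slowmo_fps // 1000      # frames kept for 60 fps effective rate
print(captured, saved)                      # 980 captured, 84 kept
print("keep 1 of every", round(captured / saved, 1), "frames")
```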
<br />
[[Category: Releases]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Extending_edgertronic_capabilities_-_Hardware_watchdog_timer&diff=5708Extending edgertronic capabilities - Hardware watchdog timer2024-03-01T17:41:38Z<p>Tfischer: /* Watchdog timer support */</p>
<hr />
<div>{{Extending edgertronic capabilities Navigation || [[Extending edgertronic capabilities - USB serial support|USB serial support]]}}<br />
<br />
<div style="width:20%;height:100%;float:right;padding-left:10px;overflow-y: scroll;"><br />
{{Extending edgertronic capabilities TOC}}<br />
</div><br />
<br />
= Background =<br />
<br />
The DM368 processor used in the edgertronic camera supports a hardware watchdog timer. The purpose of a hardware watchdog timer is to perform a hardware reset if the software crashes.<br />
<br />
= Watchdog timer support =<br />
<br />
Support for using the hardware watchdog timer is provided through the [[Adding python code and URLs|user added URLs]] edgertronic SDK mechanism. Simply copy the '''app_ext_wdt.py''' and '''app_ext_wdt.html''' files from '''http://10.11.12.13/static/sdk/app_ext''' and save them in the root directory of an SD card, insert the SD card into the camera, and then power on the camera.<br />
<br />
When using the <tt>app_ext_wdt.py</tt> code, the hardware watchdog timer is enabled by default. You can use the /wdt_control URL to disable the watchdog timer.<br />
<br />
= Watchdog URLs =<br />
<br />
{| class="wikitable"<br />
|-<br />
|| /wdt_control || Enable or disable the hardware watchdog timer and the python logic that services it.<br />
|-<br />
|| /wdt_get_status || Returns a JSON encoded dictionary containing the watchdog timer count values, including how many times the camera's hardware watchdog timer has reset the processor since power on and how many times it has reset the processor since the last factory reset.<br />
|-<br />
| /wdt_force_reset || Test feature which causes the processor to reset the camera in around 60 seconds.<br />
|}<br />
<br />
= Watchdog timer functional description =<br />
<br />
The DM368 hardware watchdog timer is made available to user space via the '''<tt>[https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.txt /dev/watchdog]</tt>''' device file. Holding an open file handle to the '''<tt>/dev/watchdog</tt>''' device file enables the hardware watchdog timer. Writing a byte of data to the open file handle services the hardware watchdog timer. The DM368 hardware watchdog timer will reset the processor if more than 56 seconds go by without the hardware watchdog timer being serviced.<br />
<br />
When the user-added URL file '''app_ext_wdt.py''' is added to the camera, then when the camera boots the '''<tt>/dev/watchdog</tt>''' device file is opened and a periodic software timer is enabled to service the hardware watchdog timer every 10 seconds. If something causes the periodic software timer to fail to run, the hardware watchdog timer will cause the camera to reboot. The logic is implemented in the same user space python instance as the python code that responds to CAMAPI and also controls the video encoding, so if the python code stops responding, that should also cause the periodic software timer to fail, thus causing the camera to reboot.<br />
<br />
= Watchdog webUI status window =<br />
<br />
[[File:Web-ui-app-ext-wdt.png|500px|right]]<br />
<br />
Clicking on the wrench icon to open settings, then clicking on the ''Watchdog'' tab will cause the watchdog timer counts to be displayed. These values cannot be changed via the webUI.<br />
<br />
<br />
{{Extending edgertronic capabilities Navigation || [[Extending edgertronic capabilities - USB serial support|USB serial support]]}}</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Template:Known_defects&diff=5707Template:Known defects2024-02-28T20:52:54Z<p>Tfischer: /* 20231223103452 Overlaying text and graphics causes crash for very small image heights */</p>
<hr />
<div>== Known defects ==<br />
<br />
The following is the list of known defects in the version 2.5.x releases.<br />
<br />
=== 20231223103452 Overlaying text and graphics causes crash for very small image heights ===<br />
<br />
If you set the vertical resolution to 96 and enable all overlays, the camera software crashes and requires a factory reset to recover.<br />
<br />
=== 20231005074253 Spurious Genlock Error reported when camera is properly genlocked ===<br />
<br />
Occasionally a Genlock Error is incorrectly reported by a camera that is properly configured as a genlock receiver. The captured video is correct. <br />
<br />
=== 20220415111422 Setting the DNS server IP address via CAMAPI net_set_configuration() is broken ===<br />
<br />
requested_dns_server and dns_server keys not handled properly in dictionary passed to <tt>net_set_configuration()</tt>.<br />
<br />
=== 20190302135422 SC2 SC2+ SC2X pretrigger frame count inaccurate when the pretrigger buffer is not full and a trigger occurs ===<br />
<br />
If you trigger an SC2, SC2+, or SC2X camera before the pretrigger buffer fills, the metadata file will have an inaccurate count of the frames captured before the trigger event. This also affects review mode.<br />
<br />
=== 201412161349 Multishot Genlock buffering can get out of sync due to power cycle or selective save ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
If you are in the middle of a Genlocked Multishot sequence and power cycle one of the cameras, the buffering will be out of sync once the camera is powered back on. Until the cameras configured for genlock talk to each other over the network cable, there is no way to resynchronize which multishot buffer is being used.<br />
<br />
Work around: Power cycle all cameras that are configured and cabled for multishot and genlock.<br />
<br />
Similarly, if you are in the middle of a Genlocked Multishot sequence and decide to save your video set, you cannot just press the save button on the genlock source camera's GUI since only the videos on the genlock source will be saved. <br />
<br />
Work around: you must press the save button on the genlock source and all the receiver camera GUIs.<br />
<br />
=== 201409120935 Updating camera fails if there is a space ' ' character in the update tarball filename ===<br />
<br />
If you download the update tarball more than once, some operating systems put a space in the file name (e.g. " (1)") so the file being downloaded will have a unique filename. If you use the file with the space in the filename, the update will fail. To work around the defect, remove the big SD card, delete the file with the space in the filename, and store the original file on the big SD card. The camera will then update correctly.<br />
<br />
=== 201409091802 Cancel trigger at the end of capture misbehaves ===<br />
<br />
No fix defect -- This is really not a defect.<br />
<br />
On occasion, if you cancel the trigger just as the post trigger capture buffer is being filled, the camera will calibrate then save the video data instead of properly handling the cancel.<br />
<br />
Work around: This is a race condition. The user thinks the camera is still capturing data when they press cancel, but in fact the camera has already switched to saving the captured video. Simply trim the video to get back to filling the pre-trigger buffer.<br />
<br />
=== 201408271324 Genlock false triggers ===<br />
<br />
No fix defect -- there are no plans to fix this defect; the hardware design doesn't support any means to fix the issue.<br />
<br />
This defect only occurs when using the [[Genlock]] feature with multiple cameras and a genlock cable.<br />
<br />
Plugging in genlock cable may trigger both genlock source and receiver cameras. Unplugging genlock cable may trigger both genlock source and receiver cameras. Powering off a genlocked camera may trigger any other connected cameras.<br />
<br />
Work around: connect all genlock cables before powering on the cameras.<br />
<br />
=== 201312111624 File timestamp is in GMT ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
The camera was intentionally designed to use GMT for the timezone when saving video files. Some might consider this a defect (issue #182).<br />
<br />
As of release 2.4.1 you can define your own filename pattern instead of using seconds since 1970 in the GMT timezone.<br />
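As a minimal sketch, assuming a GNU/Linux host with GNU <tt>date</tt>, a seconds-since-1970 value like those embedded in default filenames can be decoded into GMT:<br />

```shell
# Decode an epoch-seconds value (as embedded in default video filenames) into GMT.
# GNU date syntax; on BSD/macOS the equivalent is: date -u -r 1386787584
date -u -d @1386787584
```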
<br />
=== 201312021613 Browser forward and back buttons may change camera settings ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
If you browse to another site and then use the browser back button to return to viewing the camera, your camera settings may have changed.<br />
<br />
Work around: either don't browse to another web site or don't use the back button when you do; simply browse to the IP address of the camera.<br />
<br />
=== 201311041114 Playing last recorded video can fail in rare cases ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
The camera will automatically switch which storage device is used when the current storage device fills up and another, non-full, storage device is available. You can not play the video recorded right before the switch occurs since the active storage device has changed. <br />
<br />
Work around: You can remove the storage device and properly play the video by retrieving the video file from the non-active storage device.<br />
<br />
=== 201311101454 CAMAPI does not detect new space on mounted storage device ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
CAMAPI handles changes in storage status using an interrupt scheme (mdev). If your SD card is full and you telnet into the camera and delete some files, no event occurs, so CAMAPI doesn't detect that there is now room and the memory-full message is still displayed.<br />
<br />
Workaround: after deleting the files, remove and reinsert the storage device to create a change in storage status event.<br />
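If you are logged into the camera anyway, a quick way to confirm whether space was actually freed is <tt>df</tt>. The sketch below runs it against the current directory, since the camera's actual storage mount point is not documented here and is an assumption:<br />

```shell
# Check free space on a mounted filesystem; on the camera you would run this
# against the storage device's mount point (the path here is just the current directory)
df -h .
```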
<br><br />
<br><br />
<span style="color:#dd38da">[1]</span> Naming convention for the 'Defect numbers' is the format '''YYYYMMDDHHMM''' (year, month, day, hour, minute)</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=NTP_Network_Time_Protocol&diff=5706NTP Network Time Protocol2024-02-15T03:39:27Z<p>Tfischer: /* NTP configuration */</p>
<hr />
<div>= Camera time =<br />
<br />
Normally the camera gets its time set via the web user interface. The host computer's current date and time is passed to the camera's battery powered hardware real time clock. After that, each time the camera powers on, the hardware real time clock is read to set the Linux operating system time.<br />
<br />
The camera time can also be set automatically, over the network, using NTP - Network Time Protocol. The NTP daemon, when configured, will regularly update the Linux wall clock. If the NTP server is located on the same local area network (subnet), and several cameras are used, the difference between the Linux wall clocks of the cameras should be at most a few milliseconds.<br />
<br />
= NTP configuration =<br />
<br />
== WebUI based NTP configuration ==<br />
<br />
As of release v3.5.3 you can set the NTP server address via the web user interface. Go into settings by clicking on the wrench. You need to first select pro mode, then the network tab will be visible. In the network tab you can set the NTP DNS name or IP address. In the example, I am using a TimeMachine TM1000A, which is configured on my local area network at address <tt>10.111.0.169</tt>.<br />
<br />
== Manual configuration ==<br />
<br />
You have full control over the NTP configuration when you provide the file [http://support.ntp.org/bin/view/Support/ConfiguringNTP '''ntp.conf'''] by saving the file in the root directory on either the SD card or a USB storage device. Power cycle the camera and the ntp.conf file will be stored in the camera's internal read-write file system (<tt>'''/mnt/rw/etc/ntp.conf'''</tt>). Power cycle the camera again and NTP will be enabled and using the configuration from the ntp.conf file. If you have several cameras to configure, create a <tt>keep-files</tt> file in the root directory of the SD card. Be sure to later delete the <tt>keep-files</tt> file if you use the SD card for storing videos.<br />
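The staging steps above can be sketched on a Linux host; the <tt>mktemp -d</tt> directory stands in for the SD card's root directory:<br />

```shell
# Stage ntp.conf (and keep-files) on an SD card; mktemp -d stands in for the card's root
SD=$(mktemp -d)
printf 'driftfile /mnt/rw/etc/ntp.drift\nserver pool.ntp.org\n' > "$SD/ntp.conf"
touch "$SD/keep-files"   # keeps the files on the card so several cameras can share it
ls "$SD"
```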
<br />
=== Example ntp.conf file ===<br />
<br />
Change the server value to the NTP server of your choice. In the example below it is set to '''pool.ntp.org'''.<br />
<br />
<pre style="background:#d6e4f1"><br />
driftfile /mnt/rw/etc/ntp.drift<br />
statsdir /var/log/ntp_statistics<br />
<br />
# Specify one or more NTP servers.<br />
server pool.ntp.org<br />
</pre><br />
<br />
== Testing NTP configuration ==<br />
<br />
You can telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date 010100002001.30 ; hwclock -w<br />
</pre><br />
<br />
which will set the battery powered hardware real time clock to '''Mon Jan 1 00:00:30 UTC 2001'''. Power cycle the camera. Check the camera's date via the web interface or telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date<br />
</pre><br />
<br />
The current time and date should be shown. If not, check '''/var/log/messages''' or via the web interface <br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/static/log/messages <br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
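A minimal sketch of searching the fetched log for ntpd activity; the log lines below are hypothetical stand-ins, not real camera output:<br />

```shell
# Simulated excerpt of /var/log/messages (these log lines are hypothetical)
cat > messages <<'EOF'
Jan  1 00:00:41 camera daemon.notice ntpd[412]: Listen normally on 2 eth0
Jan  1 00:05:12 camera daemon.notice ntpd[412]: adjusting local clock by 0.042s
EOF
# Count ntpd entries, as you would in the real camera log
grep -c ntpd messages
```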
<br />
Power cycle the camera. When it reboots, verify the camera time is set correctly.<br />
<br />
= DNS configuration =<br />
<br />
If you use a computer name, such as <tt>'''pool.ntp.org'''</tt> when specifying the server in the <tt>'''ntp.conf'''</tt> file, then you need to make sure the camera's [[DNS Domain Name Services|'''DNS server configuration''']] will work in your network environment.<br />
<br />
[[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Ethernet_networking&diff=5704Ethernet networking2024-02-15T03:38:30Z<p>Tfischer: /* User configurable network settings */</p>
<hr />
<div>== Overview ==<br />
<br />
The edgertronic network configuration documentation is the longest, most detailed documentation for the entire camera. If you misconfigure the camera's network settings, which you typically figure out when you can no longer browse to the camera, then you should perform a [[Multi-function_button#Factory_reset|factory reset]] and try again.<br />
<br />
For the simplest case, you connect a network cable between your laptop and the camera, configure your laptop to use IP address 10.11.12.1 (as described below) and you are ready to use the camera by browsing to http://10.11.12.13 . If this is the first time using the camera, start with this simple configuration so you can first get familiar with the camera before you move on to a more complex networking configuration.<br />
<br />
<blockquote style="background-color: khaki; margin: 1em; padding-left: .5em; padding-right: .5em; border: solid thin gray; width: 50%;"><br />
Hint: to determine the camera's IP address, first verify the [[User_Manual_-_Multicolored_camera_LEDs#System_LED|system LED]] is either yellow, magenta, or blue, indicating the camera has an IP address. Then put the camera's SD card into your computer and check the file names to find the IP address.<br />
</blockquote><br />
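A minimal sketch of extracting the address from the marker filename on a host computer; the marker file created below is simulated:<br />

```shell
# The camera creates a marker file named cam_ip_address.<ip> on the big SD card
touch cam_ip_address.10.111.0.63           # simulate the marker file
# Strip the prefix to recover the camera's IP address
ls cam_ip_address.* | sed 's/^cam_ip_address\.//'
```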
<br />
== Network connection and status LEDs ==<br />
<br />
{|<br />
|<br />
There is a standard 10/100 Mbit/sec RJ45 Ethernet jack on the back of the edgertronic high speed camera.<br />
<br />
The camera has two Ethernet related LEDs, located on the Ethernet jack.<br />
<br />
{|class="wikitable"<br />
! LED location !! Ethernet LED !! Meaning<br />
|-<br />
| Back of camera on the Ethernet connector near power connector || Network<br>link and activity || Off - no network connection<br>On - network connection<br>Blinking - network activity, packets being sent or received<br />
|-<br />
| Back of camera on the Ethernet connector near USB connectors || Network<br>10 or 100 Mbit/s || Off - 10 Mbit/s link<br> On - 100 Mbit/s link<br />
|}<br />
<br />
|| [[File:ssc1-rev-d-back-with-black-panel-labeled-leds.jpg|400px|thumb|right|back of edgertronic high speed camera with Ethernet labeled]]<br />
|}<br />
<br />
== Camera IP address ==<br />
<br />
When using your camera, it will be connected to a network. Every device on a network must have an IP address that is unique to that network, including the camera and the laptop or tablet controlling the camera.<br />
<br />
=== Simple instructions ===<br />
<br />
If you have one camera, then you shouldn't need to change the camera's network settings. If your camera is connected directly to your laptop with an Ethernet cable, then the camera's default IP address will be '''10.11.12.13'''.<br />
<br />
To allow you to figure out the camera's IP address, the camera creates a file on the SD card where the IP address is contained in the filename. After the camera LED is solid green, remove the SD card and check the file whose name starts with '''cam_ip_address''' and ends with the camera's IP address. For example, if you are using a DHCP server which assigned an address of 10.111.0.63 to your camera, then you should find the file ''cam_ip_address.10.111.0.63'' on the SD card.<br />
<br />
Once you know the camera's IP address and have properly configured your laptop (instructions later in this article), you can browse to the camera using the Chrome web browser, with a URL like http://10.11.12.13 or http://10.111.0.63<br />
<br />
{{ReplaceIP}}<br />
<br />
=== User configurable network settings ===<br />
<br />
[[File:Settings-network-tab.png|400px|right|thumb|Network settings tab]]<br />
<br />
In software release 2.5.2, we added the ability for the user to configure the network settings via the web user interface.<br />
<br />
If you do not understand TCP/IP network configuration, you may enter the wrong values and then lose the ability to communicate with the camera over the network. When this happens, you will need to do <big>[[Multi-function_button#Factory_reset|factory reset]]</big>. If you call us for help, we will first ask you to do a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
To configure the camera's network related settings, click on the wrench [[Image:Settings_button_20150518222117.png|40px]], and when the settings modal appears, click on the PRO button [[Image:Setting-pro-button.png|40px]] at the top to make the Network tab visible. Finally click on the '''Network tab'''.<br />
<br />
If you are using ''Internet networking'', where your computer and the camera are connected to your existing network infrastructure, then click on the '''DHCP with fallback to fixed address''' button, and ignore the rest of the settings. You will need to either look at the DHCP server log to determine each camera's IP address, or remove the SD card as described in the ''Simple Instructions'' above.<br />
<br />
If you are using multiple cameras in a ''Stand alone networking'' configuration - where your computer is connected to a network switch, and several cameras are connected to the same network switch - then your easiest option is fixed IP addressing: click on the '''Fixed''' button and configure each camera using these steps:<br />
<br />
# Set your computer's Ethernet address to 10.11.12.1 as described later in this article.<br />
# Put a label on each camera with the IP address you are going to assign. Start with 10.11.12.13, and then increment the last number for each successive camera, e.g. 10.11.12.14, 10.11.12.15, ...<br />
# '''Connect one camera up at a time''' directly to your laptop and browse to http://10.11.12.13 which is the camera's factory reset default IP address when using stand-alone networking.<br />
#* If your browser reports an error, then perform a [[Multi-function_button#Factory_reset|factory reset]] on the connected camera. <br />
# Set the ''Connection type'' to Fixed<br />
# Set the ''IP Address'' to the address on the camera label that you added in the step above.<br />
# Make sure the ''Netmask'' is set to 255.255.255.0<br />
# Typically, leave the ''Gateway'' and ''NTP Server'' values blank or unchanged. Of course if you have a multi-LAN configuration the gateway will need to be set. If you have access to a local stratum 1 [[NTP Network Time Protocol|NTP]] server, then setting the NTP server field will allow the camera to provide a more accurate trigger time. <br />
# Click outside the Setting modal to close the modal and activate the settings. Since the camera will have a different IP address, the browser will redirect to the new address you assigned and attempt to re-open the live view page, which takes around 90 seconds. If the browser can not communicate with the camera, make sure:<br />
#* you have only one camera connected,<br />
#* you assigned the IP address on the camera label, and<br />
#* the filename on the SD card contains the IP address you expected.<br />
Perform a [[Multi-function_button#Factory_reset|factory reset]] if you can't communicate with the camera and try these steps again.<br />
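The labeling step can be sketched with a short loop that prints the address sequence (four cameras assumed in this example):<br />

```shell
# Generate the label addresses for a bank of cameras (four cameras in this sketch)
for i in $(seq 13 16); do
  echo "10.11.12.$i"
done
```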
<br />
== Ethernet configuration ==<br />
<br />
You have two Ethernet network configuration choices.<br />
<br />
* Internet networking - Connect your computer and the camera to your existing network infrastructure.<br />
* Stand alone networking - Connect a network cable between your laptop and the camera. If you are connecting more than one camera, you will need a network switch and an additional network cable.<br />
<br />
== Ethernet bandwidth ==<br />
<br />
The maximum Ethernet transfer rate for an edgertronic camera is 60 Mbits/sec.<br />
<br />
When the host computer is uploading a video file from the camera while the camera is busy capturing the next video, the transfer rate is 16 Mbits/sec.<br />
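As a back-of-the-envelope check, the peak rate implies an upload time on the order of the following; the 1 GiB video size is just an example:<br />

```shell
# Rough upload-time estimate for a 1 GiB video at the camera's 60 Mbit/s peak rate
SIZE_MB=1024      # video size in megabytes
RATE_MBIT=60      # maximum Ethernet transfer rate in megabits per second
echo "$(( SIZE_MB * 8 / RATE_MBIT )) seconds"
```

At 60 Mbit/s a 1 GiB video uploads in a little over two minutes; at the 16 Mbit/s capture-busy rate it takes roughly four times as long.<br />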
<br />
== Camera connected to DHCP network ==<br />
<br />
Your PC or laptop should already be connected to the existing network that includes a DHCP server, possibly via a wifi connection. If your PC can access the Internet (and no company IT person touched your laptop), then most likely there is a DHCP server on your network as well.<br />
<br />
Your network will assign an IP address to the camera using the DHCP protocol. The camera creates a file on the big SD card with the assigned IP address in the filename. After the camera LED goes solid green, remove the big SD card, insert it into your PC and you will be able to read the IP address of the camera.<br />
<br />
'''If the camera's system LED is solid blue, your camera is using a DHCP assigned IP address.'''<br />
<br />
Once you know the camera's IP address, stick the SD card back in the camera and type the IP address into the Chrome web browser's address bar.<br />
<br />
Using DHCP assigned network addresses works best when the DHCP server remembers the IP address it assigns to devices, so that the camera gets the same IP address each time it is powered on. Modern DHCP servers work this way, so you can put a sticker on the camera with the assigned DHCP IP address.<br />
<br />
== Stand alone networking - laptop to camera networking ==<br />
<br />
If you are using the camera in a location where it is inconvenient to connect to an existing network, you can simply connect a network cable between your laptop and the camera. The camera will detect there is no network infrastructure and configure itself accordingly. You will need to modify your laptop network settings so the laptop can communicate with the camera.<br />
<br />
Your laptop needs to be configured to use IP address '''10.11.12.1'''. If you are familiar with laptop network configuration, you can make the changes now or follow the step-by-step instructions below.<br />
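The reason for the 10.11.12.1 address: with a 255.255.255.0 netmask, two devices can talk directly only if the first three octets of their addresses match, as this sketch illustrates:<br />

```shell
# With netmask 255.255.255.0 (/24) the first three octets select the network;
# the laptop (10.11.12.1) and the camera (10.11.12.13) must share them.
echo "10.11.12.1"  | cut -d. -f1-3
echo "10.11.12.13" | cut -d. -f1-3
```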
<br />
=== Mac OS X stand alone network configuration===<br />
<br />
Most Mac computers are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your Mac and camera are connected together using an Ethernet cable.<br />
<br />
Screenshots from Mac OS X 10.11.6.<br />
<br />
* Pull down the Apple menu in the top left corner and select '''System Preferences'''.<br />
[[File:Mac-apple-menu-dropdown-annotated.png|300px|none]]<br />
<br />
* In System Preferences select '''Network'''.<br />
[[File:Mac-system-prefrences-annotated.png|600px|none]]<br />
<br />
* In the left pane of the ''Network'' dialog, select '''Ethernet'''. In the right pane of the ''Network'' dialog, set ''Configure IPv4'' to '''Manually''' configured. Set the IP address to '''10.11.12.1''' and the Subnet Mask to '''255.255.255.0'''. The Router setting is not important - 10.11.12.254 is fine. Then click the ''Apply'' button.<br />
[[File:Mac-network-dialog-annotated.png|600px|none]]<br />
<br />
==== Troubleshooting Mac OS X networking ====<br />
<br />
===== Can not connect to camera after updating Mac OS =====<br />
<br />
Several customers reported that after they updated their laptop, they could no longer connect to the camera. The problem is that the Apple update process can corrupt your network settings.<br />
<br />
One solution is to connect the camera to the computer and then delete the network adapter setting that was causing problems. Reboot the laptop and then re-create the network setup.<br />
<br />
Once your laptop is configured, you can browse to the camera using the URL:<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
=== Ubuntu OS stand alone network configuration ===<br />
<br />
*Ubuntu Version: 20.04<br />
<br />
Follow these steps to change the configuration to use a fixed IP address when your computer and camera are connected together using an Ethernet cable:<br />
<br />
*Go to 'Settings' -> 'Network' and enable the 'Wired' option by clicking the button (it looks like a switch) if it is 'off' by default.<br />
*Click the small gear ('settings') icon next to the switch mentioned above.<br />
*You can see a pop-up modal form with the caption 'Wired' with options for configuring 'IPv4', 'IPv6', and other parameters.<br />
*Click 'IPv4', select 'IPv4 Method' as 'Manual', and under the 'Addresses' tab, enter the following IP numbers:<br />
** Address: 10.11.12.1<br />
** Netmask : 255.255.255.0<br />
** Gateway : ''leave blank''<br />
*Choose 'Automatic' for 'DNS' and 'Routers' configs in the same modal box.<br />
<br />
*Click 'Apply' and close the modal box.<br />
*You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), simply change the 'IPv4 Method' from 'Manual' back to 'Automatic (DHCP)'.<br />
<br />
=== Windows 11 stand alone network configuration ===<br />
<br />
Most computers running Windows 11 are configured to allow Ethernet to work automatically over an Internet connection (meaning the IP address is assigned by a DHCP server). You need to change the Windows 11 configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click on the search icon which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win11b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win11d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win11e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win11f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
Many thanks to Piroz for providing the Windows 11 screen shots used above.<br />
<br />
=== Windows 10 stand alone network configuration ===<br />
<br />
Most computers running Windows 10 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click inside the search box which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win10b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win10d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win10e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win10f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 8 stand alone network configuration ===<br />
<br />
Most computers running Windows 8 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the upper right corner to bring up the ''charms bar'' and select '''Start'''.<br />
* Type '''network''' to bring up the Network browser window.<br />
* Select the '''Network''' tab and then the '''Properties''' icon.<br />
[[File:Win8-network-window-annotated.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', select '''Change adapter settings'''.<br />
[[File:Win8-network-and-sharing-center-window-annotated.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, select '''Ethernet'''.<br />
[[File:Win8-network-connections-window-annotated.png|600px|none]]<br />
<br><br />
* In ''Ethernet Properties'', select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win8-ethernet-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
[[File:Win8-internet-version-4-tcp-ipv4-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 7 stand alone network configuration ===<br />
<br />
Most computers running Windows 7 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Click on the Start icon in the lower left corner.<br />
<br />
[[File:Win-7-network-start-button.png]] <br />
<br />
* Click on Control Panel on the right side.<br />
<br />
[[File:Win-7-network-control-panel-select.png|300px]]<br />
<br />
* Click on Network and Internet.<br />
<br />
[[File:Win-7-network-control-panel.png|500px]]<br />
<br />
* Click on Network and Sharing Center.<br />
<br />
[[File:Win-7-network-control-panel-network-and-internet.png|500px]]<br />
<br />
* Click on Change adapter settings.<br />
<br />
[[File:Win-7-network-control-panel-network-and-sharing.png|500px]]<br />
<br />
* Click on Local Area Connection.<br />
<br />
[[File:Win-7-network-control-panel-adapter-settings.png|500px]]<br />
<br />
* Click on Properties.<br />
<br />
[[File:Win-7-network-control-panel-local-area-network.png|500px]]<br />
<br />
* Click on Internet Protocol Version 4 (TCP/IPv4).<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-properties.png|500px]]<br />
<br />
*Set the following values:<br />
<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-tcpv4-settings.png|500px]]<br />
<br />
<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Android stand alone Ethernet network configuration ===<br />
<br />
A customer asked if an Ethernet dongle attached to a Samsung Galaxy Android tablet would work. "It should" was my reply, along with finding and buying the [https://www.amazon.com/Plugable-Ethernet-Compatible-Raspberry-AX88772A/dp/B00RM3KXAU Plugable USB 2.0 OTG Micro-B Ethernet Adaptor] for around $14 so I could run a quick test. Years ago I met Bernie, the owner of Plugable, at an embedded Linux conference. I became a fan as Plugable cares about Linux support, and we all know Android runs on top of Linux. The edgertronic camera runs Linux too. My office is full of Plugable gear.<br />
<br />
I had it working in 10 minutes. Here are the steps I followed:<br />
<br />
# attach the Plugable Ethernet adaptor to the tablet's USB OTG connector<br />
# click on Settings and then Connections<br />
# click on More connection settings<br />
# click on Ethernet<br />
# click on Configure Ethernet device<br />
## adjust the settings as shown below (click on an image to enlarge it). The DNS address and Default router settings are not used, but you must set some value or you cannot save the configuration.<br />
# Launch Chrome and browse to 10.11.12.13<br />
{|<br />
| [[File:Android-settings.png|250px]]<br />
| [[File:Android-settings-more-connections.png|250px]]<br />
| [[File:Android-settings-configure-ethernet.png|250px]]<br />
|-<br />
| [[File:Android-settings-ethernet-settings.png|250px]]<br />
| [[File:Android-browser.png|250px]]<br />
|}<br />
<br />
== Changing the camera's IP address ==<br />
<br />
For stand alone operations, the camera uses a default IP address:<br />
<br />
<pre style="background:#d6e4f1"><br />
10.11.12.13<br />
</pre><br />
<br />
'''Changing the fixed IP address is an experimental camera feature.''' If you make a mistake [[Multi-function button|reset the camera to factory default values]].<br />
<br />
=== Software release 2.5.2 and newer ===<br />
<br />
See [[Ethernet_networking#User_configured_network_settings|user configured network settings]] above.<br />
<br />
=== Software release 2.4.1 and newer ===<br />
<br />
If the interfaces file has an address other than 10.11.12.13, then the DHCP protocol support is disabled so you can use a fixed IP address on a network with a DHCP server.<br />
<br />
=== Software release 2.2 and newer ===<br />
<br />
It is possible to change the camera's fixed IP address by saving a file named '''interfaces''' in the root directory of the SD card and rebooting the camera TWICE. <br />
<br />
If you have two cameras, download the [http://www.edgertronic.com/releases/interfaces/interfaces.10.11.12.14 interfaces.10.11.12.14 file], rename it to '''interfaces''', and store it on the SD card of the second camera. That camera will then have IP address 10.11.12.14. The file contains:<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.14<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
If you have three cameras, store the following contents in the '''interfaces''' file on the SD card of the third camera. It will have IP address 10.11.12.15.<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.15<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
<br />
If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
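The per-camera interfaces files above differ only in the address line, so you can generate one for any camera with a small helper script. This is a hypothetical convenience script (not part of the camera software); run it on your computer and copy the resulting file to the camera's SD card:<br />

```shell
# Hypothetical helper: generate an 'interfaces' file giving a camera the
# fixed address 10.11.12.<last octet>. Not part of the camera software.
make_interfaces() {
  cat <<EOF
#enable-dhcp no

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.11.12.$1
netmask 255.255.255.0
network 10.11.12.0
broadcast 10.11.12.255
EOF
}

out=$(mktemp -d)/interfaces
make_interfaces 15 > "$out"    # file for the third camera (10.11.12.15)
grep '^address' "$out"         # → address 10.11.12.15
```

Copy the generated file, named exactly '''interfaces''', to the root directory of the camera's SD card and reboot the camera twice, as described above.<br />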
<br />
=== Software release 1.7 and older ===<br />
<br />
It is possible to change the camera's fixed IP address. Unfortunately, the change can not be made via the web interface or by storing a configuration file on the SD card. You need to be comfortable with command line tools like ''telnet'' and the text editor ''vi''. You also need to understand IP networking concepts such as the network mask. The change is stored in non-volatile memory, so you only have to make it once.<br />
<br />
The camera runs Linux and uses the standard <tt>/etc/network/interfaces</tt> file. The default contents of the interfaces file contains:<br />
<br />
<pre style="background:#d6e4f1"><br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.13<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
You adjust the eth0 settings to use a different fixed IP address. To make the change, you can telnet into the camera as user root (no password) and use the vi editor to modify the file.<br />
<br />
On your computer, bring up a command or terminal window.<br />
<br />
<pre style="background:#d6e4f1"><br />
telnet 10.11.12.13 # or the DHCP IP address <br />
vi /etc/network/interfaces<br />
</pre><br />
<br />
Make your changes, double-check everything is correct, and save the file. Then activate your changes:<br />
<br />
<pre style="background:#d6e4f1"><br />
ifconfig eth0 down ; ifconfig eth0 up<br />
</pre><br />
<br />
As soon as the camera takes down the eth0 interface, telnet will close. You should then be able to telnet back into the camera using the fixed IP address you configured. If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
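If you prefer a non-interactive edit over vi, a sed one-liner can rewrite the address line. This is a sketch demonstrated on a local temporary copy of the file; on the camera, inside the telnet session, the path is <tt>/etc/network/interfaces</tt>, and it assumes the camera's sed supports the -i flag (common in BusyBox builds, but not verified on all firmware versions):<br />

```shell
# Sketch: change the fixed address without vi. Shown against a local
# temporary copy; on the camera the file is /etc/network/interfaces.
# Assumes sed supports -i (verify on your firmware before relying on it).
F=$(mktemp)
printf 'address 10.11.12.13\nnetmask 255.255.255.0\n' > "$F"
sed -i 's/^address .*/address 10.11.12.20/' "$F"
grep '^address' "$F"   # → address 10.11.12.20
```

After editing, activate the change with <tt>ifconfig eth0 down ; ifconfig eth0 up</tt> as described above.<br />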
<br />
= What happens when you plug in an Ethernet cable =<br />
<br />
You don't need to bother reading this. It is a detailed explanation written to remind me of the implementation and to help the camera testers understand what is going on.<br />
<br />
When the camera is booting up, the system LED blinks yellow, indicating the camera is not ready and has not received a DHCP assigned IP address.<br />
<br />
There are four cases to consider:<br />
<br />
== Other end of the cable is disconnected ==<br />
<br />
The Ethernet standard doesn't allow the camera to detect that a cable is plugged in when the other end of the cable is unplugged or plugged into a network switch that is powered off. The short answer is nothing happens: the system LED blinks yellow / blue, indicating no Ethernet connection was detected.<br />
<br />
== No DHCP server available ==<br />
<br />
The original use case for the edgertronic camera was a laptop directly connected to the camera via an Ethernet cable. To make this as easy as possible, the camera uses a fixed IP address. The IP address is read from the <tt>/etc/network/interfaces</tt> file. The factory default fixed IP address is 10.11.12.13. The fixed IP address is used until a different IP address is provided by a DHCP server. When the camera is using a fixed IP address, the system LED is solid yellow if the address is the default 10.11.12.13, or solid magenta if a different fixed IP address is being used.<br />
<br />
== DHCP server available ==<br />
<br />
As stated above, the camera assumes it will be using a fixed IP address (and thus the system LED starts out blinking yellow and then typically solid yellow or solid magenta). Once the camera's networking subsystem is alive (around 44 seconds after power on), the DHCP client in the camera requests an IP address. If no DHCP server is available, then the camera continues to use a fixed IP address (and the system LED stays yellow or magenta). If a DHCP server responds to the camera's request for an IP address, then the camera stops using the fixed IP address and switches to the dynamically assigned IP address. The system LED changes to solid blue. The camera creates a file on the SD card where the filename contains the dynamically assigned IP address.<br />
<br />
== Ethernet cable unplugged and plugged back in again ==<br />
<br />
The networking system is designed to handle transient errors, such as a cable being bumped or temporarily rerouted and then plugged back in again. For the camera, this means if you unplug the Ethernet cable and plug it back in within 7 seconds, nothing happens. The camera's network protocols are built on TCP, a reliable protocol, so any packets lost while the cable was disconnected are resent. There is one exception: the camera supports a UDP multicast network trigger, and if that packet is lost, the camera will not trigger as expected. <br />
<br />
If the Ethernet cable is unplugged for more than 7 seconds, the system LED blinks yellow / blue to indicate no Ethernet connection is detected, and the networking stack is reset, meaning the camera will revert to using the fixed IP address once a network cable is reconnected. For normal camera use cases this seems an odd choice: if the camera was using a DHCP assigned address, shouldn't it continue to use that same IP address when the Ethernet cable is reconnected? The answer is no, to maintain compatibility with the networking standards, because the camera can not tell which network it is now connected to. For normal camera use cases it is always the same network, but the networking system is defined to allow you to disconnect from one network and connect the camera to another network without having to power cycle the camera.<br />
<br />
What this means is if you have a DHCP assigned network address, disconnect the Ethernet cable, wait until the system LED blinks yellow / blue, then reconnect the network cable, you will see the system LED go from blinking yellow / blue, to solid yellow (using a fixed IP address), then after around 5 seconds, the camera again gets a dynamic IP address from the DHCP server and thus the system LED goes solid blue. All modern DHCP servers will give the camera the same dynamic IP address.<br />
<br />
[[Category:Windows]] [[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Ethernet_networking&diff=5703Ethernet networking2024-02-15T03:38:13Z<p>Tfischer: /* User configurable network settings */</p>
<hr />
<div>== Overview ==<br />
<br />
The edgertronic network configuration documentation is the longest, most detailed documentation for the entire camera. If you misconfigure the camera's network settings, which you typically figure out when you can no longer browse to the camera, then you should perform a [[Multi-function_button#Factory_reset|factory reset]] and try again.<br />
<br />
For the simplest case, you connect a network cable between your laptop and the camera, configure your laptop to use IP address 10.11.12.1 (as described below), and you are ready to use the camera by browsing to http://10.11.12.13 . If this is the first time using the camera, start with this simple configuration so you can get familiar with the camera before you move on to a more complex networking configuration.<br />
<br />
<blockquote style="background-color: khaki; margin: 1em; padding-left: .5em; padding-right: .5em; border: solid thin gray; width: 50%;"><br />
Hint: to determine the camera's IP address, first verify the [[User_Manual_-_Multicolored_camera_LEDs#System_LED|system LED]] is either yellow, magenta, or blue, indicating the camera has an IP address. Then put the camera's SD card into your computer and check the file names to find the IP address.<br />
</blockquote><br />
<br />
== Network connection and status LEDs ==<br />
<br />
{|<br />
|<br />
There is a standard 10/100 Mbit/sec RJ45 Ethernet jack on the back of the edgertronic high speed camera.<br />
<br />
The camera has two Ethernet related LEDs, located on the Ethernet jack.<br />
<br />
{|class="wikitable"<br />
! LED location !! Ethernet LED !! Meaning<br />
|-<br />
| Back of camera on the Ethernet connector near power connector || Network<br>link and activity || Off - no network connection<br>On - network connection<br>Blinking - network activity, packets being sent or received<br />
|-<br />
| Back of camera on the Ethernet connector near USB connectors || Network<br>10 or 100 Mbit/s || Off - 10 Mbit/s link<br> On - 100 Mbit/s link<br />
|}<br />
<br />
|| [[File:ssc1-rev-d-back-with-black-panel-labeled-leds.jpg|400px|thumb|right|back of edgertronic high speed camera with Ethernet labeled]]<br />
|}<br />
<br />
== Camera IP address ==<br />
<br />
When using your camera, it will be connected to a network. Every device on a network must have an IP address that is unique to that network, including the camera and the laptop or tablet controlling the camera.<br />
<br />
=== Simple instructions ===<br />
<br />
If you have one camera, then you shouldn't need to change the camera's network settings. If your camera is connected directly to your laptop with an Ethernet cable, then the camera's default IP address will be '''10.11.12.13'''.<br />
<br />
To allow you to figure out the camera's IP address, the camera creates a file on the SD card where the IP address is contained in the filename. After the camera LED is solid green, remove the SD card and check the file whose name starts with '''cam_ip_address''' and ends with the camera's IP address. For example, if you are using a DHCP server which assigned an address of 10.111.0.63 to your camera, then you should find the file ''cam_ip_address.10.111.0.63'' on the SD card.<br />
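Because the IP address is embedded in the marker filename, you can also pull it out with a short shell snippet once the SD card is mounted on your computer. This sketch simulates the SD card with a temporary directory; in real use, set SDCARD to your card's actual mount point:<br />

```shell
# Simulate the SD card with a temp directory; on your computer, set
# SDCARD to the mount point of the camera's SD card instead.
SDCARD=$(mktemp -d)
touch "$SDCARD/cam_ip_address.10.111.0.63"   # file the camera would create
marker=$(ls "$SDCARD"/cam_ip_address.*)
echo "camera IP: ${marker##*cam_ip_address.}"   # → camera IP: 10.111.0.63
```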
<br />
Once you know the camera's IP address and have properly configured your laptop (instructions later in this article), you can browse to the camera using the Chrome web browser, with a URL like http://10.11.12.13 or http://10.111.0.63<br />
<br />
{{ReplaceIP}}<br />
<br />
=== User configurable network settings ===<br />
<br />
[[File:Settings-network-tab.png|400px|right|thumb|Network settings tab]]<br />
<br />
In software release 2.5.2, we added the ability for the user to configure the network settings via the web user interface.<br />
<br />
If you do not understand TCP/IP network configuration, you may enter the wrong values and lose the ability to communicate with the camera over the network. When this happens, you will need to do a <big>[[Multi-function_button#Factory_reset|factory reset]]</big>. If you call us for help, we will first ask you to do a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
To configure the camera's network related settings, click on the wrench [[Image:Settings_button_20150518222117.png|40px]]. When the settings modal appears, click on the PRO button [[Image:Setting-pro-button.png|40px]] at the top to reveal the Network tab, then click on the '''Network''' tab.<br />
<br />
If you are using ''Internet networking'', where your computer and the camera are connected to your existing network infrastructure, then click on the '''DHCP with fallback to fixed address''' button, and ignore the rest of the settings. You will need to either look at the DHCP server log to determine each camera's IP address, or remove the SD card as described in the ''Simple Instructions'' above.<br />
<br />
If you are using multiple cameras in a ''Stand alone networking'' configuration - where your computer and several cameras are all connected to the same network switch - then your easiest option is fixed IP addressing. Click on the '''Fixed''' button and configure each camera using these steps:<br />
<br />
# Set your computer's Ethernet address to 10.11.12.1 as described later in this article.<br />
# Put a label on each camera with the IP address you are going to assign. Start with 10.11.12.13, and then increment the last number for each successive camera, e.g. 10.11.12.14, 10.11.12.15, ...<br />
# '''Connect one camera up at a time''' directly to your laptop and browse to http://10.11.12.13 which is the camera's factory reset default IP address when using stand-alone networking.<br />
#* If your browser reports an error, then perform a [[Multi-function_button#Factory_reset|factory reset]] on the connected camera. <br />
# Set the ''Connection type'' to Fixed<br />
# Set the ''IP Address'' to the address on the camera label that you added in the step above.<br />
# Make sure the ''Netmask'' is set to 255.255.255.0<br />
# Typically, leave the ''Gateway'' and ''NTP Server'' values blank or unchanged. Of course, if you have a multi-LAN configuration, the gateway will need to be set. If you have access to a local stratum 1 [[NTP Network Time Protocol|NTP]] server, then setting the NTP server field will allow the camera to provide a more accurate trigger time. <br />
# Click outside the Settings modal to close it and activate the settings. Since the camera will have a different IP address, the browser will redirect to the new address you assigned and attempt to re-open the live view page, which takes around 90 seconds. If the browser can not communicate with the camera, make sure:<br />
#* you have only one camera connected,<br />
#* you assigned the IP address on the camera label, and<br />
#* the filename on the SD card contains the IP address you expected.<br />
Perform a [[Multi-function_button#Factory_reset|factory reset]] if you can't communicate with the camera and try these steps again.<br />
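As a quick sanity check of the addressing plan in step 2, this snippet prints the label/IP pair for each camera (starting at 10.11.12.13 and incrementing the last octet):<br />

```shell
# Print the fixed IP address to write on each camera's label (step 2 above).
N=3   # number of cameras
for i in $(seq 0 $((N - 1))); do
  echo "camera $((i + 1)): 10.11.12.$((13 + i))"
done
# → camera 1: 10.11.12.13
#   camera 2: 10.11.12.14
#   camera 3: 10.11.12.15
```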
<br />
== Ethernet configuration ==<br />
<br />
You have two Ethernet network configuration choices.<br />
<br />
* Internet networking - Connect your computer and the camera to your existing network infrastructure.<br />
* Stand alone networking - Connect a network cable between your laptop and the camera. If you are connecting more than one camera, you will need a network switch and an additional network cable.<br />
<br />
== Ethernet bandwidth ==<br />
<br />
The maximum Ethernet transfer rate for an edgertronic camera is 60 Mbits/sec.<br />
<br />
When a host computer is uploading a video file from the camera while the camera is busy capturing the next video, the transfer rate is 16 Mbits/sec.<br />
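These rates let you estimate how long a video upload will take. A sketch, using an assumed 1.5 GB file size purely for illustration:<br />

```shell
# Rough upload-time estimate: megabytes * 8 bits/byte / rate in Mbit/s.
SIZE_MB=1500      # assumed video file size, for illustration only
RATE_MBITS=60     # camera's maximum Ethernet transfer rate
echo "about $((SIZE_MB * 8 / RATE_MBITS)) seconds"   # → about 200 seconds
```

At the 16 Mbit/s rate (camera busy capturing), the same file would take roughly four times as long.<br />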
<br />
== Camera connected to DHCP network ==<br />
<br />
Your PC or laptop should already be connected to the existing network that includes a DHCP server, possibly via a wifi connection. If your PC can access the Internet (and no company IT person touched your laptop), then most likely there is a DHCP server on your network as well.<br />
<br />
Your network will assign an IP address to the camera using the DHCP protocol. The camera creates a file on the big SD card with the assigned IP address in the filename. After the camera LED goes solid green, remove the big SD card, insert it into your PC and you will be able to read the IP address of the camera.<br />
<br />
'''If the camera's system LED is solid blue, your camera is using a DHCP assigned IP address.'''<br />
<br />
Once you know the camera's IP address, stick the SD card back in the camera and type the IP address into the Chrome web browser's address bar.<br />
<br />
Using DHCP assigned network addresses works best when the DHCP server remembers the IP address it assigns to each device, so the camera gets the same IP address each time it is powered on. Modern DHCP servers work in this manner, so you can put a sticker on the camera with the assigned DHCP IP address.<br />
<br />
== Stand alone networking - laptop to camera networking ==<br />
<br />
If you are using the camera in a location where it is inconvenient to connect to an existing network, you can simply connect a network cable between your laptop and the camera. The camera will detect there is no network infrastructure and configure itself accordingly. You will need to modify your laptop network settings so the laptop can communicate with the camera.<br />
<br />
Your laptop needs to be configured to use IP address '''10.11.12.1'''. If you are familiar with laptop network configuration, you can make the changes now or follow the step-by-step instructions below.<br />
<br />
=== Mac OS X stand alone network configuration===<br />
<br />
Most Mac computers are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your Mac and camera are connected together using an Ethernet cable.<br />
<br />
Screenshots from Mac OS X 10.11.6.<br />
<br />
* Pull down the Apple menu in the top left corner and select '''System Preferences'''.<br />
[[File:Mac-apple-menu-dropdown-annotated.png|300px|none]]<br />
<br />
* In System Preferences select '''Network'''.<br />
[[File:Mac-system-prefrences-annotated.png|600px|none]]<br />
<br />
* In the left pane of the ''Network'' dialog, select '''Ethernet'''. In the right pane of the ''Network'' dialog, set ''Configure IPv4'' to '''Manually''' configured. Set the IP address to '''10.11.12.1''' and the Subnet Mask to '''255.255.255.0'''. The Router setting is not important - 10.11.12.254 is fine. Then click the ''Apply'' button.<br />
[[File:Mac-network-dialog-annotated.png|600px|none]]<br />
<br />
==== Troubleshooting Mac OS X networking ====<br />
<br />
===== Can not connect to camera after updating Mac OS =====<br />
<br />
Several customers reported that after they updated their laptop, they could no longer connect to the camera. The problem is that the Apple update process can corrupt your network settings.<br />
<br />
One solution is to connect the camera to the computer and then delete the network adapter setting that was causing problems. Reboot the laptop and then re-create the network settings.<br />
<br />
Once your laptop is configured, you can browse to the camera using the URL:<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
=== Ubuntu OS stand alone network configuration ===<br />
<br />
*Ubuntu Version: 20.04<br />
<br />
Follow these steps to change the configuration to use a fixed IP address when your computer and camera are connected together using an Ethernet cable:<br />
<br />
*Go to 'Settings' -> 'Network' and enable the 'Wired' option by clicking the toggle switch if it is off.<br />
*Click the small gear icon next to that toggle switch.<br />
*A modal dialog appears with the caption 'Wired', containing tabs for configuring 'IPv4', 'IPv6', and other parameters.<br />
*Click 'IPv4', set 'IPv4 Method' to 'Manual', and under the 'Addresses' tab enter the following values:<br />
** Address: 10.11.12.1<br />
** Netmask: 255.255.255.0<br />
** Gateway: ''leave blank''<br />
*Choose 'Automatic' for the 'DNS' and 'Routes' settings in the same dialog.<br />
<br />
*Click 'Apply' and close the dialog.<br />
*You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but change the 'IPv4 Method' from 'Manual' back to 'Automatic (DHCP)'.<br />
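If you prefer the command line, the same change can likely be made with NetworkManager's nmcli tool. This is a sketch, not a substitute for the GUI steps above; the connection name "Wired connection 1" is an assumption (list yours with <tt>nmcli con show</tt>):<br />

```shell
# Sketch: set the laptop's fixed address via nmcli. The connection name
# "Wired connection 1" is an assumption - list yours with: nmcli con show
nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 10.11.12.1/24
nmcli con up "Wired connection 1"

# To revert to DHCP later:
# nmcli con mod "Wired connection 1" ipv4.method auto
# nmcli con up "Wired connection 1"
```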
<br />
=== Windows 11 stand alone network configuration ===<br />
<br />
Most computers running Windows 11 are configured to allow Ethernet to work automatically over an Internet connection (meaning the IP address is assigned by a DHCP server). You need to change the Windows 11 configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click on the search icon which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win11b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win11d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win11e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win11f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
Many thanks to Piroz for providing the Windows 11 screen shots used above.<br />
<br />
=== Windows 10 stand alone network configuration ===<br />
<br />
Most computers running Windows 10 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click inside the search box which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win10b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win10d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win10e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win10f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 8 stand alone network configuration ===<br />
<br />
Most computers running Windows 8 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the upper right corner to bring up the ''charms bar'' and select '''Start'''.<br />
* Type '''network''' to bring up the Network browser window.<br />
* Select the '''Network''' tab and then the '''Properties''' icon.<br />
[[File:Win8-network-window-annotated.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', select '''Change adapter settings'''.<br />
[[File:Win8-network-and-sharing-center-window-annotated.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, select '''Ethernet'''.<br />
[[File:Win8-network-connections-window-annotated.png|600px|none]]<br />
<br><br />
* In ''Ethernet Properties'', select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win8-ethernet-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
[[File:Win8-internet-version-4-tcp-ipv4-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 7 stand alone network configuration ===<br />
<br />
Most computers running Windows 7 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Click on the Start icon in the lower left corner.<br />
<br />
[[File:Win-7-network-start-button.png]] <br />
<br />
* Click on Control Panel on the right side.<br />
<br />
[[File:Win-7-network-control-panel-select.png|300px]]<br />
<br />
* Click on Network and Internet.<br />
<br />
[[File:Win-7-network-control-panel.png|500px]]<br />
<br />
* Click on Network and Sharing Center.<br />
<br />
[[File:Win-7-network-control-panel-network-and-internet.png|500px]]<br />
<br />
* Click on Change adapter settings.<br />
<br />
[[File:Win-7-network-control-panel-network-and-sharing.png|500px]]<br />
<br />
* Click on Local Area Connection.<br />
<br />
[[File:Win-7-network-control-panel-adapter-settings.png|500px]]<br />
<br />
* Click on Properties.<br />
<br />
[[File:Win-7-network-control-panel-local-area-network.png|500px]]<br />
<br />
* Click on Internet Protocol Version 4 (TCP/IPv4).<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-properties.png|500px]]<br />
<br />
*Set the following values:<br />
<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-tcpv4-settings.png|500px]]<br />
<br />
<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Android stand alone Ethernet network configuration ===<br />
<br />
A customer asked if an Ethernet dongle attached to a Samsung Galaxy Android tablet would work. "It should" was my reply, and I bought the [https://www.amazon.com/Plugable-Ethernet-Compatible-Raspberry-AX88772A/dp/B00RM3KXAU Plugable USB 2.0 OTG Micro-B Ethernet Adaptor] for around $14 so I could run a quick test. Years ago I met Bernie, the owner of Plugable, at an embedded Linux conference. I became a fan as Plugable cares about Linux support, and both Android and the edgertronic camera run on top of Linux. My office is full of Plugable gear.<br />
<br />
I had it working in 10 minutes. Here are the steps I followed:<br />
<br />
# Attach the Plugable Ethernet adaptor to the tablet's USB OTG connector<br />
# Click on Settings and then Connections<br />
# Click on More connection settings<br />
# Click on Ethernet<br />
# Click on Configure Ethernet device<br />
## Adjust the settings as shown below (click on an image to enlarge it). The DNS address and Default router settings are not used, but you must enter some value or you cannot save the configuration.<br />
# Launch Chrome and browse to 10.11.12.13<br />
{|<br />
| [[File:Android-settings.png|250px]]<br />
| [[File:Android-settings-more-connections.png|250px]]<br />
| [[File:Android-settings-configure-ethernet.png|250px]]<br />
|-<br />
| [[File:Android-settings-ethernet-settings.png|250px]]<br />
| [[File:Android-browser.png|250px]]<br />
|}<br />
<br />
== Changing the camera's IP address ==<br />
<br />
For stand alone operations, the camera uses a default IP address:<br />
<br />
<pre style="background:#d6e4f1"><br />
10.11.12.13<br />
</pre><br />
<br />
'''Changing the fixed IP address is an experimental camera feature.''' If you make a mistake [[Multi-function button|reset the camera to factory default values]].<br />
<br />
=== Software release 2.5.2 and newer ===<br />
<br />
See [[Ethernet_networking#User_configured_network_settings|user configured network settings]] above.<br />
<br />
=== Software release 2.4.1 and newer ===<br />
<br />
If the interfaces file has an address other than 10.11.12.13, then DHCP support is disabled, so you can use a fixed IP address on a network that has a DHCP server.<br />
<br />
=== Software release 2.2 and newer ===<br />
<br />
It is possible to change the camera's fixed IP address by saving a file named '''interfaces''' in the root directory of the SD card and rebooting the camera TWICE. <br />
<br />
If you have two cameras, store the [http://www.edgertronic.com/releases/interfaces/interfaces.10.11.12.14 interfaces.10.11.12.14 file] on the SD card on the second camera. It will have IP address 10.11.12.14.<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.14<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
If you have three cameras, store the following contents into the interfaces file on the SD card on the third camera. It will have IP address 10.11.12.15.<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.15<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
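The per-camera interfaces files differ only in the '''address''' line, following the numbering convention above (10.11.12.13, .14, .15, ...). If you have many cameras, the file can be generated by a small shell sketch run on your own computer (the camera-number variable and output location are illustrative assumptions; copy the result to the root directory of that camera's SD card and reboot twice as described above):<br />

```shell
#!/bin/sh
# Sketch: generate the interfaces file for camera number N.
# Camera 1 -> 10.11.12.13, camera 2 -> 10.11.12.14, and so on.
n=3                                # third camera (example value)
addr="10.11.12.$((12 + n))"
cat > interfaces <<EOF
#enable-dhcp no

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address $addr
netmask 255.255.255.0
network 10.11.12.0
broadcast 10.11.12.255
EOF
echo "wrote interfaces with address $addr"
```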
<br />
<br />
If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
<br />
=== Software release 1.7 and older ===<br />
<br />
It is possible to change the camera's fixed IP address. Unfortunately, the change cannot be made via the web interface or by storing a configuration file on the SD card. You need to be comfortable with command line tools like ''telnet'' and the text editor ''vi''. You also need to understand IP networking concepts such as the network mask. The change is stored in non-volatile memory, so you only have to make it once.<br />
<br />
The camera runs Linux and uses the standard <tt>/etc/network/interfaces</tt> file. The default contents of the interfaces file contains:<br />
<br />
<pre style="background:#d6e4f1"><br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.13<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
You adjust the eth0 settings to use a different fixed IP address. To make the change, you can telnet into the camera as user root (no password) and use the vi editor to modify the file.<br />
<br />
On your computer, bring up a command or terminal window.<br />
<br />
<pre style="background:#d6e4f1"><br />
telnet 10.11.12.13 # or the DHCP IP address <br />
vi /etc/network/interfaces<br />
</pre><br />
<br />
Make your changes, double check that everything is correct, and save the file. Then activate your changes:<br />
<br />
<pre style="background:#d6e4f1"><br />
ifconfig eth0 down ; ifconfig eth0 up<br />
</pre><br />
<br />
As soon as the camera takes down the eth0 interface, telnet will close. You should then be able to telnet back into the camera using the fixed IP address you configured. If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
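If you would rather not edit the file interactively with vi, the address line can also be changed with ''sed''. A minimal sketch, shown here against a local copy so nothing on the camera is touched (on the camera the file is <tt>/etc/network/interfaces</tt>, and the new address 10.11.12.14 is an example value):<br />

```shell
#!/bin/sh
# Sketch: non-interactive edit of the address line using sed.
# Work on a local copy for illustration; on the camera, back up
# /etc/network/interfaces first, then edit it in place the same way.
printf 'address 10.11.12.13\nnetmask 255.255.255.0\n' > interfaces.copy
sed -i 's/^address 10\.11\.12\.13$/address 10.11.12.14/' interfaces.copy
grep '^address' interfaces.copy
```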
<br />
= What happens when you plug in an Ethernet cable =<br />
<br />
You don't need to bother reading this. It is a detailed explanation written to remind me of the implementation and to help the camera testers understand what is going on.<br />
<br />
When the camera is booting up, the system LED blinks yellow, indicating the camera is not ready and has not received a DHCP assigned IP address.<br />
<br />
There are four cases to consider:<br />
<br />
== Other end of the cable is disconnected ==<br />
<br />
The Ethernet standard doesn't allow the camera to detect that a cable is plugged in if the other end of the cable is unplugged or plugged into a network switch that is powered off. The short answer is nothing happens. The system LED blinks yellow / blue, indicating no Ethernet connection detected.<br />
<br />
== No DHCP server available ==<br />
<br />
The original use case for the edgertronic camera was a laptop directly connected to the camera via an Ethernet cable. To make this as easy as possible, the camera uses a fixed IP address. The IP address is read from the <tt>/etc/network/interfaces</tt> file. The factory default fixed IP address is 10.11.12.13. The fixed IP address is used until a different IP address is provided by a DHCP server. When the camera is using a fixed IP address, the system LED is solid yellow if the address is the default 10.11.12.13, or solid magenta if a fixed IP address other than the default is being used.<br />
<br />
== DHCP server available ==<br />
<br />
As stated above, the camera assumes it will be using a fixed IP address (and thus the system LED starts out blinking yellow and then typically solid yellow or solid magenta). Once the camera's networking subsystem is alive (around 44 seconds after power on), the DHCP client in the camera requests an IP address. If no DHCP server is available, then the camera continues to use a fixed IP address (and the system LED stays yellow or magenta). If a DHCP server responds to the camera's request for an IP address, then the camera stops using the fixed IP address and switches to the dynamically assigned IP address. The system LED changes to solid blue. The camera creates a file on the SD card where the filename contains the dynamically assigned IP address.<br />
<br />
== Ethernet cable unplugged and plugged back in again ==<br />
<br />
The networking system is defined to handle transient errors, such as a cable being bumped or temporarily rerouted and then plugged back in again. For the camera, this means if you unplug the Ethernet cable and plug it back in again in under 7 seconds, nothing happens. The network protocols used by the camera are built on a reliable transport (TCP), so any packets lost while the cable was disconnected are resent. There is one exception: the camera supports a UDP multicast network trigger, and if that packet is lost, the camera will not trigger as expected. <br />
<br />
If the Ethernet cable is unplugged for more than 7 seconds, the system LED blinks yellow / blue to indicate no Ethernet connection detected, and the networking stack is reset, meaning the camera will revert to using the fixed IP address once a network cable is connected. For normal camera use cases, this seems an odd choice: if the camera was using a DHCP assigned address, shouldn't it continue to use that same IP address when the Ethernet cable is reconnected? The answer is no, to maintain compatibility with the networking standards, because the camera cannot tell which network it is now connected to. For normal camera use cases, it is always the same network, but the networking system is defined to allow you to disconnect from one network and connect the camera to another without having to power cycle the camera.<br />
<br />
What this means is: if you have a DHCP assigned network address, disconnect the Ethernet cable, wait until the system LED blinks yellow / blue, then reconnect the network cable, you will see the system LED go from blinking yellow / blue to solid yellow (using a fixed IP address); then, after around 5 seconds, the camera again gets a dynamic IP address from the DHCP server and the system LED goes solid blue. Most modern DHCP servers will give the camera the same dynamic IP address it had before.<br />
<br />
[[Category:Windows]] [[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Main_Page&diff=5702Main Page2024-02-15T03:36:26Z<p>Tfischer: </p>
<hr />
<div>{|<br />
|-<br />
| width="70%" valign="top" class="mainpage_hubbox" |<br />
{{#css:MainPage.css}}{{DISPLAYTITLE:<span style="position: absolute; clip: rect(1px 1px 1px 1px); clip: rect(1px, 1px, 1px, 1px);">{{FULLPAGENAME}}</span>}}<br />
<div style="font-size: 200%; line-height: 120%;">edgertronic - A Microscope for Time</div><br><br />
<br />
[[File:Start-here.svg|50]]<span style="font-size: 150%">[[Quick start guide|Quick Start Guide]]</span><br><br />
[[File:Start-here.svg|50]]<span style="font-size: 150%">[[User Manual|Edgertronic high speed camera user manual]]</span><br><br />
<br />
{|<br />
|- <br />
| <span style="color:#662627;">'''About the edgertronic camera'''</span><br />
{{Main page/AboutCamera}} <br />
<span style="color:#662627;">'''Videos and Publicity'''</span><br />
{{Main page/Videos}}<br />
<span style="color:#662627;">'''Fine Print'''</span><br />
{{Main page/Business}}<br />
| valign="top" |<span style="color:#662627;"> '''Troubleshooting'''</span><br />
{{Main page/Troubleshooting}} <br />
<span style="color:#662627;">'''Tips and Techniques'''</span><br />
{{Main page/TipsAndTechniques}}<br />
<span style="color:#662627;">'''For Developers'''</span><br />
{{Main page/Developers}}<br />
|}<br />
''<font size="4"><span style="color:#003300;font-size:110%;">'''An honest camera at an honest price. No tricks, no gimmicks.'''</span></font>''<br><br />
''<span style="color:#662627; font-size: 175%">'''<u>Contact Us</u>'''</span>''<br />
<div style="width:50%">{{ContactUs}}</div><br />
| valign="top" class="mainpage_hubbox" |<div class="mainpage_hubtitle"><div class="left"><br />
<!-- --><br />
<span style="color:aqua; background:black; font-size: 150%">Recent News..</span><br><br><br />
We have been repairing cameras after the <span style="color:red">wrong power adaptor</span> was used. Don't do it! Use only [[Turning the edgertronic camera on and off#Power_Supply_Compatibility|approved power adaptors]].<br />
* Specifically, don't connect a Trackman power supply to an edgertronic camera. The repair bill is over $1,500.<br><br />
[[Software releases|<span style="color:#662627;">'''''<u>Version 2.5.3</u>'''''</span>]]<span style="color:#662627;"> software release.</span><br><br />
[[Updating camera software|<span style="color:#662627;">'''''<u>Update</u>'''''</span>]]<span style="color:#662627;"> your camera now!</span><br><br />
[[iPad Issues|<span style="color:#662627;">'''''<u>iPads running iPadOS 14.6</u>'''''</span>]]<span style="color:#662627;"> require 2.5.1rc20 or higher.</span><br><br />
* [[Auto exposure]] - Automatically adjust the camera to maintain a correct exposure<br />
* [[Ethernet_networking#User configurable network settings|WebUI network configuration]]<br />
* [[Edgertronic reliability - Don't try this at home|Real world camera survival stories]]<br />
[[Image:Ssc1-rev-d-front.jpg|thumbnail|none|300px|alt=edgertronic_camera_with_nikon_AF_NIKKOR_50mm_1:1.8_D |<span style="color:#663399;">'''Edgertronic SC2 high speed camera with 50mm lens'''</span>]]<br />
<span style="color:#006600;">'''Customer Quotes'''</span>: <br />
<br />
<span style="font-style:italic;color:#990AD2;"><br />
"We prefer to use our Edgertronic cameras instead since<br><br />
they are more user-friendly and would facilitate<br><br />
collaboration with the other labs we are working<br><br />
with because they require no additional software<br><br />
other than a web browser."</span><br />
<br />
Nils T., Research Biologist.<br />
<br />
<span style="font-style:italic;color:#990AD2;"><br />
"Perfect! Thanks so much for:<br><br />
Making an awesome product<br><br />
Supporting in an awesome manner<br><br />
Answering my email in an uber-prompt fashion!"</span><br />
<br />
Paul S., aerospace industry<br />
<br />
<span style="font-style:italic;color:#990AD2;"><br />
The camera has been installed for almost a year and<br><br />
we’ve solved many issues on the cut over sequence<br><br />
of the machine. We are projecting to save over<br><br />
$300,000 of waste this year. The camera played<br><br />
a big part in our goal. You guys have created a<br><br />
great camera.</span><br />
<br />
Andrew S., metal manufacturing industry<br />
<br />
|}</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Ethernet_networking&diff=5700Ethernet networking2024-02-15T03:34:53Z<p>Tfischer: /* User configured network settings */</p>
<hr />
<div>== Overview ==<br />
<br />
The edgertronic network configuration documentation is the longest, most detailed documentation for the entire camera. If you misconfigure the camera's network settings, which you typically figure out when you can no longer browse to the camera, then you should perform a [[Multi-function_button#Factory_reset|factory reset]] and try again.<br />
<br />
For the simplest case, you connect a network cable between your laptop and the camera, configure your laptop to use IP address 10.11.12.1 (as described below), and you are ready to use the camera by browsing to http://10.11.12.13 . If this is the first time using the camera, start with this simple configuration so you can get familiar with the camera before you move on to a more complex networking configuration.<br />
<br />
<blockquote style="background-color: khaki; margin: 1em; padding-left: .5em; padding-right: .5em; border: solid thin gray; width: 50%;"><br />
Hint: to determine the camera's IP address, first verify the [[User_Manual_-_Multicolored_camera_LEDs#System_LED|system LED]] is either yellow, magenta, or blue, indicating the camera has an IP address. Then put the camera's SD card into your computer and check the file names to find the IP address.<br />
</blockquote><br />
<br />
== Network connection and status LEDs ==<br />
<br />
{|<br />
|<br />
There is a standard 10/100 Mbit/sec RJ45 Ethernet jack on the back of the edgertronic high speed camera.<br />
<br />
The camera has two Ethernet related LEDs, located on the Ethernet jack.<br />
<br />
{|class="wikitable"<br />
! LED location !! Ethernet LED !! Meaning<br />
|-<br />
| Back of camera on the Ethernet connector near power connector || Network<br>link and activity || Off - no network connection<br>On - network connection<br>Blinking - network activity, packets being sent or received<br />
|-<br />
| Back of camera on the Ethernet connector near USB connectors || Network<br>10 or 100 Mbit/s || Off - 10 Mbit/s link<br> On - 100 Mbit/s link<br />
|}<br />
<br />
|| [[File:ssc1-rev-d-back-with-black-panel-labeled-leds.jpg|400px|thumb|right|back of edgertronic high speed camera with Ethernet labeled]]<br />
|}<br />
<br />
== Camera IP address ==<br />
<br />
When using your camera, it will be connected to a network. Every device on a network must have an IP address that is unique to that network, including the camera and the laptop or tablet controlling the camera.<br />
<br />
=== Simple instructions ===<br />
<br />
If you have one camera, then you shouldn't need to change the camera's network settings. If your camera is connected directly to your laptop with an Ethernet cable, then the camera's default IP address will be '''10.11.12.13'''.<br />
<br />
To allow you to figure out the camera's IP address, the camera creates a file on the SD card where the IP address is contained in the filename. After the camera LED is solid green, remove the SD card and check the file whose name starts with '''cam_ip_address''' and ends with the camera's IP address. For example, if you are using a DHCP server which assigned an address of 10.111.0.63 to your camera, then you should find the file ''cam_ip_address.10.111.0.63'' on the SD card.<br />
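Because the marker filename encodes the address, a script can recover it with plain shell parameter expansion. A minimal sketch (the example filename is taken from the paragraph above; the SD card mount point will vary by system):<br />

```shell
#!/bin/sh
# Sketch: recover the camera IP from the marker file's name.
# Example marker filename as found in the SD card's root directory.
f="cam_ip_address.10.111.0.63"
ip="${f#cam_ip_address.}"     # strip the fixed prefix, leaving the IP
echo "camera is at http://$ip"
```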
<br />
Once you know the camera's IP address and have properly configured your laptop (instructions later in this article), you can browse to the camera using the Chrome web browser, with a URL like http://10.11.12.13 or http://10.111.0.63<br />
<br />
{{ReplaceIP}}<br />
<br />
=== User configurable network settings ===<br />
<br />
[[File:Settings-network-tab.png|400px|right|thumb|Network settings tab]]<br />
<br />
In software release 2.5.2, we added the ability for the user to configure the network settings via the web user interface.<br />
<br />
If you do not understand TCP/IP network configuration, you may enter the wrong values and lose the ability to communicate with the camera over the network. When this happens, you will need to do a <big>[[Multi-function_button#Factory_reset|factory reset]]</big>. If you call us for help, we will first ask you to do a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
To configure the camera's network related settings, click on the wrench [[Image:Settings_button_20150518222117.png|40px]], and when the settings modal appears, click on the PRO button [[Image:Setting-pro-button.png|40px]] at the top. Then click on the '''Network''' tab.<br />
<br />
If you are using ''Internet networking'', where your computer and the camera are connected to your existing network infrastructure, then click on the '''DHCP with fallback to fixed address''' button, and ignore the rest of the settings. You will need to either look at the DHCP server log to determine each camera's IP address, or remove the SD card as described in the ''Simple instructions'' above.<br />
<br />
If you are using multiple cameras in a ''Stand alone networking'' configuration - where your computer is connected to a network switch and several cameras are connected to the same switch - then your easiest option is fixed IP addressing. Click on the '''Fixed''' button and configure each camera using these steps:<br />
<br />
# Set your computer's Ethernet address to 10.11.12.1 as described later in this article.<br />
# Put a label on each camera with the IP address you are going to assign. Start with 10.11.12.13, and then increment the last number for each successive camera, e.g. 10.11.12.14, 10.11.12.15, ...<br />
# '''Connect one camera up at a time''' directly to your laptop and browse to http://10.11.12.13 which is the camera's factory reset default IP address when using stand-alone networking.<br />
#* If your browser reports an error, then perform a [[Multi-function_button#Factory_reset|factory reset]] on the connected camera. <br />
# Set the ''Connection type'' to Fixed<br />
# Set the ''IP Address'' to the address on the camera label that you added in the step above.<br />
# Make sure the ''Netmask'' is set to 255.255.255.0<br />
# Typically, leave the ''Gateway'' and ''NTP Server'' values blank or unchanged. Of course if you have a multi-LAN configuration the gateway will need to be set. If you have access to a local stratum 1 NTP server, then setting the NTP server field will allow the camera to provide a more accurate trigger time. <br />
# Click outside the Settings modal to close the modal and activate the settings. Since the camera will have a different IP address, the browser will redirect to the new address you assigned and attempt to re-open the live view page, which takes around 90 seconds. If the browser cannot communicate with the camera, make sure:<br />
#* you have only one camera connected,<br />
#* you assigned the IP address on the camera label, and<br />
#* the filename on the SD card contains the IP address you expected.<br />
Perform a [[Multi-function_button#Factory_reset|factory reset]] if you can't communicate with the camera and try these steps again.<br />
<br />
== Ethernet configuration ==<br />
<br />
You have two Ethernet network configuration choices.<br />
<br />
* Internet networking - Connect your computer and the camera to your existing network infrastructure.<br />
* Stand alone networking - Connect a network cable between your laptop and the camera. If you are connecting more than one camera, you will need a network switch and an additional network cable.<br />
<br />
== Ethernet bandwidth ==<br />
<br />
The maximum Ethernet transfer rate for an edgertronic camera is 60 Mbits/sec.<br />
<br />
When a host computer is uploading a video file from the camera while the camera is busy capturing the next video, the transfer rate drops to 16 Mbits/sec.<br />
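These rates let you estimate how long an upload will take. A rough sketch of the arithmetic (the 2000 MB file size is an example value, and the estimate ignores protocol overhead):<br />

```shell
#!/bin/sh
# Estimate transfer time: size in megabytes * 8 bits/byte / rate in Mbit/s.
size_mbytes=2000      # example: a 2 GB video file
rate_mbits=60         # camera's peak Ethernet rate
secs=$(( size_mbytes * 8 / rate_mbits ))
echo "about $secs seconds"    # roughly 266 seconds at 60 Mbit/s
```

At the reduced 16 Mbit/s rate while the camera is capturing, the same file would take nearly four times as long.<br />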
<br />
== Camera connected to DHCP network ==<br />
<br />
Your PC or laptop should already be connected to the existing network that includes a DHCP server, possibly via a wifi connection. If your PC can access the Internet (and no company IT person touched your laptop), then most likely there is a DHCP server on your network as well.<br />
<br />
Your network will assign an IP address to the camera using the DHCP protocol. The camera creates a file on the big SD card with the assigned IP address in the filename. After the camera LED goes solid green, remove the big SD card, insert it into your PC and you will be able to read the IP address of the camera.<br />
<br />
'''If the camera's system LED is solid blue, your camera is using a DHCP assigned IP address.'''<br />
<br />
Once you know the camera's IP address, stick the SD card back in the camera and type the IP address into the Chrome web browser's address bar.<br />
<br />
Using DHCP assigned network addresses works best when the DHCP server remembers the IP address it assigns to each device, so the camera gets the same IP address each time it is powered on. Modern DHCP servers work in this manner, so you can put a sticker on the camera with the assigned DHCP IP address.<br />
<br />
== Stand alone networking - laptop to camera networking ==<br />
<br />
If you are using the camera in a location where it is inconvenient to connect to an existing network, you can simply connect a network cable between your laptop and the camera. The camera will detect there is no network infrastructure and configure itself accordingly. You will need to modify your laptop network settings so the laptop can communicate with the camera.<br />
<br />
Your laptop needs to be configured to use IP address '''10.11.12.1'''. If you are familiar with laptop network configuration, you can make the changes now or follow the step-by-step instructions below.<br />
<br />
=== Mac OS X stand alone network configuration ===<br />
<br />
Most Mac computers are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your Mac and camera are connected together using an Ethernet cable.<br />
<br />
Screenshots from Mac OS X 10.11.6.<br />
<br />
* Pull down the Apple menu in the top left corner and select '''System Preferences'''.<br />
[[File:Mac-apple-menu-dropdown-annotated.png|300px|none]]<br />
<br />
* In System Preferences select '''Network'''.<br />
[[File:Mac-system-prefrences-annotated.png|600px|none]]<br />
<br />
* In the left pane of the ''Network'' dialog, select '''Ethernet'''. In the right pane of the ''Network'' dialog, set ''Configure IPv4'' to '''Manually''' configured. Set the IP address to '''10.11.12.1''' and the Subnet Mask to '''255.255.255.0'''. The Router setting is not important - 10.11.12.254 is fine. Then click the ''Apply'' button.<br />
[[File:Mac-network-dialog-annotated.png|600px|none]]<br />
<br />
==== Troubleshooting Mac OS X networking ====<br />
<br />
===== Can not connect to camera after updating Mac OS =====<br />
<br />
Several customers reported that after they updated their laptop, they could no longer connect to the camera. The problem is the Apple update process can corrupt your network settings.<br />
<br />
One solution is to connect the camera to the computer and then delete the network adapter setting that was causing problems. Reboot the laptop and then re-create the network setup.<br />
<br />
Once your laptop is configured, you can browse to the camera using the URL:<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
=== Ubuntu OS stand alone network configuration ===<br />
<br />
*Ubuntu Version: 20.04<br />
<br />
Follow these steps to change the configuration to use a fixed IP address when your computer and camera are connected together using an Ethernet cable:<br />
<br />
*Go to 'Settings' -> 'Network' and enable the 'Wired' option by clicking its toggle switch if it is off.<br />
*Click the small gear icon next to that switch.<br />
*A pop-up dialog with the caption 'Wired' appears, with tabs for configuring 'IPv4', 'IPv6', and other parameters.<br />
*Click 'IPv4', set 'IPv4 Method' to 'Manual', and under 'Addresses' enter the following values:<br />
** Address: 10.11.12.1<br />
** Netmask: 255.255.255.0<br />
** Gateway: ''leave blank''<br />
*Leave 'DNS' and 'Routes' set to 'Automatic' in the same dialog.<br />
<br />
*Click 'Apply' and close the dialog.<br />
*You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but change 'IPv4 Method' from 'Manual' back to 'Automatic (DHCP)'.<br />
<br />
=== Windows 11 stand alone network configuration ===<br />
<br />
Most computers running Windows 11 are configured to allow Ethernet to work automatically over an Internet connection (meaning the IP address is assigned by a DHCP server). You need to change the Windows 11 configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click on the search icon which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win11b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win11d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win11e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win11f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
Many thanks to Piroz for providing the Windows 11 screen shots used above.<br />
<br />
=== Windows 10 stand alone network configuration ===<br />
<br />
Most computers running Windows 10 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the bottom of the screen, and click inside the search box which is just to the right of the 4 pane blue window icon.<br />
* Type '''network''', then click on the Control Panel icon.<br />
[[File:Win10a-control-panel.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', double-click on '''Change adapter settings'''.<br />
[[File:Win10b-network-and-sharing-center.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, double-click on '''Local Area Network'''.<br />
[[File:Win10c-network-connections.png|600px|none]]<br />
<br><br />
* In the "Local Area Network Connection Status" window, press the '''Properties''' button.<br />
[[File:Win10d-local-area-connecton-status.png|200px|none]]<br />
<br><br />
* In the "Local Area Connection Properties" window, select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win10e-local-area-connection-properties.png|200px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
[[File:Win10f-internet-protocol-version-4-tcpip-properties.png|200px|none]]<br />
<br><br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 8 stand alone network configuration ===<br />
<br />
Most computers running Windows 8 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Slide your mouse to the upper right corner to bring up the ''charms bar'' and select '''Start'''.<br />
* Type '''network''' to bring up the Network browser window.<br />
* Select the '''Network''' tab and then the '''Properties''' icon.<br />
[[File:Win8-network-window-annotated.png|600px|none]]<br />
<br><br />
* In the left panel of the ''Network and Sharing Center'', select '''Change adapter settings'''.<br />
[[File:Win8-network-and-sharing-center-window-annotated.png|600px|none]]<br />
<br><br />
* In the ''Network Connections'' window, select '''Ethernet'''.<br />
[[File:Win8-network-connections-window-annotated.png|600px|none]]<br />
<br><br />
* In ''Ethernet Properties'', select '''Internet Protocol version 4 (TCP/IPv4)''' and press the '''Properties''' button.<br />
[[File:Win8-ethernet-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''Use the following IP address:''' and enter the following settings<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
[[File:Win8-internet-version-4-tcp-ipv4-properties-dialog-annotated.png|300px|none]]<br />
<br><br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Windows 7 stand alone network configuration ===<br />
<br />
Most computers running Windows 7 are configured to allow Ethernet to work automatically over an Internet connection. You need to change the configuration to use a fixed IP address when your laptop and camera are connected together using an Ethernet cable. <br />
<br />
Open the Control Panel network settings dialog and adjust the Ethernet network settings.<br />
<br />
* Click on the Start icon in the lower left corner.<br />
<br />
[[File:Win-7-network-start-button.png]] <br />
<br />
* Click on Control Panel on the right side.<br />
<br />
[[File:Win-7-network-control-panel-select.png|300px]]<br />
<br />
* Click on Network and Internet.<br />
<br />
[[File:Win-7-network-control-panel.png|500px]]<br />
<br />
* Click on Network and Sharing Center.<br />
<br />
[[File:Win-7-network-control-panel-network-and-internet.png|500px]]<br />
<br />
* Click on Change adapter settings.<br />
<br />
[[File:Win-7-network-control-panel-network-and-sharing.png|500px]]<br />
<br />
* Click on Local Area Connection.<br />
<br />
[[File:Win-7-network-control-panel-adapter-settings.png|500px]]<br />
<br />
* Click on Properties.<br />
<br />
[[File:Win-7-network-control-panel-local-area-network.png|500px]]<br />
<br />
* Click on Internet Protocol Version 4 (TCP/IPv4).<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-properties.png|500px]]<br />
<br />
*Set the following values:<br />
<br />
** IP address: 10.11.12.1<br />
** Subnet mask: 255.255.255.0<br />
** Default gateway: ''leave blank''<br />
<br />
[[File:Win-7-network-control-panel-network-adapter-tcpv4-settings.png|500px]]<br />
<br />
<br />
* Select '''OK''' in the ''Internet Protocol version 4 (TCP/IPv4)'' dialog and '''Close''' in the ''Ethernet Properties'' dialog.<br />
<br />
You can now browse to your camera using Chrome and the URL<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13<br />
</pre><br />
<br />
To revert back to the previous Ethernet configuration (so you can connect to the Internet), follow the same steps, but select '''Obtain an IP address automatically''' in the ''Internet Protocol Version 4 (TCP/IPv4)'' dialog.<br />
<br />
=== Android stand alone Ethernet network configuration ===<br />
<br />
A customer asked if an Ethernet dongle attached to a Samsung Galaxy Android tablet would work. "It should" was my reply, along with finding and buying the [https://www.amazon.com/Plugable-Ethernet-Compatible-Raspberry-AX88772A/dp/B00RM3KXAU Plugable USB 2.0 OTG Micro-B Ethernet Adaptor] for around $14 so I could run a quick test. Years ago I met Bernie, the owner of Plugable, at an embedded Linux conference. I became a fan because Plugable cares about Linux support, and Android runs on top of Linux, as does the edgertronic camera. My office is full of Plugable gear.<br />
<br />
I had it working in 10 minutes. Here are the steps I followed:<br />
<br />
# Attach the Plugable Ethernet adaptor to the tablet's USB OTG connector.<br />
# Click on Settings and then Connections.<br />
# Click on More connection settings.<br />
# Click on Ethernet.<br />
# Click on Configure Ethernet device.<br />
## Adjust the settings as shown below (click on an image to enlarge it). The DNS address and default router settings are not used, but you must enter some value or you cannot save the configuration.<br />
# Launch Chrome and browse to 10.11.12.13<br />
{|<br />
| [[File:Android-settings.png|250px]]<br />
| [[File:Android-settings-more-connections.png|250px]]<br />
| [[File:Android-settings-configure-ethernet.png|250px]]<br />
|-<br />
| [[File:Android-settings-ethernet-settings.png|250px]]<br />
| [[File:Android-browser.png|250px]]<br />
|}<br />
<br />
== Changing the camera's IP address ==<br />
<br />
For stand alone operations, the camera uses a default IP address:<br />
<br />
<pre style="background:#d6e4f1"><br />
10.11.12.13<br />
</pre><br />
<br />
'''Changing the fixed IP address is an experimental camera feature.''' If you make a mistake [[Multi-function button|reset the camera to factory default values]].<br />
<br />
=== Software release 2.5.2 and newer ===<br />
<br />
See [[Ethernet_networking#User_configured_network_settings|user configured network settings]] above.<br />
<br />
=== Software release 2.4.1 and newer ===<br />
<br />
If the interfaces file has an address other than 10.11.12.13, then the DHCP protocol support is disabled so you can use a fixed IP address on a network with a DHCP server.<br />
<br />
=== Software release 2.2 and newer ===<br />
<br />
It is possible to change the camera's fixed IP address by saving a file named '''interfaces''' in the root directory of the SD card and rebooting the camera TWICE. <br />
<br />
If you have two cameras, store the [http://www.edgertronic.com/releases/interfaces/interfaces.10.11.12.14 interfaces.10.11.12.14 file] on the SD card on the second camera. It will have IP address 10.11.12.14.<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.14<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
If you have three cameras, store the following contents into the interfaces file on the SD card on the third camera. It will have IP address 10.11.12.15.<br />
<br />
<pre style="background:#d6e4f1"><br />
#enable-dhcp no<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.15<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
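The two files above differ only in the address line. If you are setting up several cameras, a short script can generate a per-camera interfaces file. This is purely a convenience sketch (the helper name is ours); the generated text matches the examples above:<br />

```python
# Template matching the interfaces files shown above; only the address differs.
TEMPLATE = """#enable-dhcp no

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address {address}
netmask 255.255.255.0
network 10.11.12.0
broadcast 10.11.12.255
"""

def interfaces_file(last_octet: int) -> str:
    """Return interfaces file text for camera address 10.11.12.<last_octet>."""
    return TEMPLATE.format(address=f"10.11.12.{last_octet}")

# Example: the file for the third camera (10.11.12.15)
print(interfaces_file(15))
```

Save the output as a file named '''interfaces''' in the root directory of each camera's SD card, as described above.<br />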
<br />
<br />
If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
<br />
=== Software release 1.7 and older ===<br />
<br />
It is possible to change the camera's fixed IP address. Unfortunately, the fixed IP address can not be changed via the web interface or by storing a configuration file on the SD card. You need to feel comfortable with command line tools like ''telnet'' and the text editor ''vi''. You also need to understand IP networking concepts such as the network mask. The change is stored in non-volatile memory, so you only have to make the change once.<br />
<br />
The camera runs Linux and uses the standard <tt>/etc/network/interfaces</tt> file. The default contents of the interfaces file contains:<br />
<br />
<pre style="background:#d6e4f1"><br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 10.11.12.13<br />
netmask 255.255.255.0<br />
network 10.11.12.0<br />
broadcast 10.11.12.255<br />
</pre><br />
<br />
You adjust the eth0 settings to use a different fixed IP address. To make the change, you can telnet into the camera as user root (no password) and use the vi editor to modify the file.<br />
<br />
On your computer, bring up a command or terminal window.<br />
<br />
<pre style="background:#d6e4f1"><br />
telnet 10.11.12.13 # or the DHCP IP address <br />
vi /etc/network/interfaces<br />
</pre><br />
<br />
Make your changes, double check that everything is correct, and save the file. Then you can activate the new configuration:<br />
<br />
<pre style="background:#d6e4f1"><br />
ifconfig eth0 down ; ifconfig eth0 up<br />
</pre><br />
<br />
As soon as the camera takes down the eth0 interface, telnet will close. You should then be able to telnet back into the camera using the fixed IP address you configured. If you can no longer interact with the camera, [[Multi-function button|reset the camera to factory default values]].<br />
<br />
= What happens when you plug in an Ethernet cable =<br />
<br />
You don't need to bother reading this. It is a detailed explanation written to remind me of the implementation and to help the camera testers understand what is going on.<br />
<br />
When the camera is booting up, the system LED blinks yellow, indicating the camera is not ready and has not received a DHCP assigned IP address.<br />
<br />
There are four cases to consider:<br />
<br />
== Other end of the cable is disconnected ==<br />
<br />
The Ethernet standard doesn't allow the camera to detect that a cable is plugged in if the other end of the cable is unplugged or plugged into a network switch that is powered off. The short answer is nothing happens. The system LED blinks yellow / blue indicating no Ethernet connection detected.<br />
<br />
== No DHCP server available ==<br />
<br />
The original use case for the edgertronic camera was a laptop directly connected to the camera via an Ethernet cable. To make this as easy as possible, the camera uses a fixed IP address. The IP address is read from the <tt>/etc/network/interfaces</tt> file. The factory default fixed IP address is 10.11.12.13. The fixed IP address is used until a different IP address is provided by a DHCP server. When the camera is using a fixed IP address, the system LED is solid yellow if the address is the default 10.11.12.13, or solid magenta if a fixed IP address other than the default is being used.<br />
<br />
== DHCP server available ==<br />
<br />
As stated above, the camera assumes it will be using a fixed IP address (and thus the system LED starts out blinking yellow and then typically solid yellow or solid magenta). Once the camera's networking subsystem is alive (around 44 seconds after power on), the DHCP client in the camera requests an IP address. If no DHCP server is available, then the camera continues to use a fixed IP address (and the system LED stays yellow or magenta). If a DHCP server responds to the camera's request for an IP address, then the camera stops using the fixed IP address and switches to the dynamically assigned IP address. The system LED changes to solid blue. The camera creates a file on the SD card where the filename contains the dynamically assigned IP address.<br />
<br />
== Ethernet cable unplugged and plugged back in again ==<br />
<br />
The networking system is defined to handle transient errors, such as a cable being bumped or temporarily rerouted and then plugged back in again. For the camera, this means if you unplug the Ethernet cable and plug it back in again in under 7 seconds, nothing happens. The camera's network protocols are built on TCP, a reliable protocol, so any packets lost while the cable was disconnected are resent. There is one exception: the camera supports a UDP multicast network trigger, and if that packet is lost, the camera will not trigger as expected. <br />
<br />
If the Ethernet cable is unplugged for more than 7 seconds, the system LED blinks yellow / blue to indicate no Ethernet connection detected, and the networking stack is reset, meaning the camera will revert to using the fixed IP address once a network cable is connected. For normal camera use cases, this seems an odd choice because if the camera was using a DHCP assigned address, shouldn't the camera continue to use that same IP address when the Ethernet cable is reconnected? The answer is no, to maintain compatibility with the networking standards. The reason is the camera can not tell which network it is now connected to. For normal camera use cases, it is always the same network. But the networking system is defined to allow you to disconnect from one network and connect the camera to another network without having to power cycle the camera.<br />
<br />
What this means is: if you have a DHCP-assigned network address, disconnect the Ethernet cable, wait until the system LED blinks yellow / blue, then reconnect the network cable, you will see the system LED go from blinking yellow / blue to solid yellow (fixed IP address in use), and then, after around 5 seconds, the camera again gets a dynamic IP address from the DHCP server and the system LED goes solid blue. Most DHCP servers will give the camera the same dynamic IP address it had before.<br />
<br />
[[Category:Windows]] [[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=NTP_Network_Time_Protocol&diff=5699NTP Network Time Protocol2024-02-14T23:33:05Z<p>Tfischer: </p>
<hr />
<div>= Camera Time =<br />
<br />
Normally the camera gets its time set via the web user interface. The host computer's current date and time is passed to the camera's battery powered hardware real time clock. After that, each time the camera powers on, the hardware real time clock is read to set the Linux operating system time.<br />
<br />
The camera time can also be set automatically, over the network, using NTP - Network Time Protocol. The NTP daemon, when configured, will regularly update the Linux wall clock. If the NTP server is located on the same local area network (subnet) and several cameras are used, the Linux wall clocks in the cameras should differ by at most a few milliseconds.<br />
<br />
= NTP configuration =<br />
<br />
To enable NTP, you must provide the file [http://support.ntp.org/bin/view/Support/ConfiguringNTP '''ntp.conf'''] by saving the file in the root directory on either the SD card or a USB storage device. Power cycle the camera and the ntp.conf file will be stored in the camera's internal read-write file system (<tt>'''/mnt/rw/etc/ntp.conf'''</tt>). Power cycle the camera again and NTP will be enabled and using the configuration from the ntp.conf file. If you have several cameras to configure, create a <tt>keep-files</tt> file in the root directory of the SD card. Be sure to later delete the <tt>keep-files</tt> file if you use the SD card for storing videos.<br />
<br />
== Example ntp.conf file ==<br />
<br />
Change the server value to the NTP server of your choice. In the example below it is set to '''pool.ntp.org'''.<br />
<br />
<pre style="background:#d6e4f1"><br />
driftfile /mnt/rw/etc/ntp.drift<br />
statsdir /var/log/ntp_statistics<br />
<br />
# Specify one or more NTP servers.<br />
server pool.ntp.org<br />
</pre><br />
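To sanity-check from your host computer that the server named in ntp.conf answers queries, a minimal SNTP client can be used. This is a hypothetical helper for testing from the host, not part of the camera software:<br />

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request() -> bytes:
    """48-byte SNTP request: LI=0, VN=3, Mode=3 (client) packed in the first byte."""
    return b"\x1b" + 47 * b"\0"

def query_ntp(server: str = "pool.ntp.org", timeout: float = 5.0) -> float:
    """Return the server's transmit time as a Unix timestamp."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_sntp_request(), (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp seconds field is at bytes 40-43 of the reply
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

# Example (requires network access and a reachable NTP server):
#   import time
#   print(time.ctime(query_ntp("pool.ntp.org")))
```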
<br />
== Testing NTP configuration ==<br />
<br />
You can telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date 010100002001.30 ; hwclock -w<br />
</pre><br />
<br />
which will set the battery powered hardware real time clock to '''Mon Jan 1 00:00:30 UTC 2001'''. Power cycle the camera. Check the camera's date via the web interface or telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date<br />
</pre><br />
<br />
The current time and date should be shown. If not, check '''/var/log/messages''' on the camera, or view it via the web interface <br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/static/log/messages <br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
<br />
Power cycle the camera. When it reboots, verify the camera time is set correctly.<br />
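The argument 010100002001.30 used above follows the MMDDhhmmCCYY.ss layout accepted by the camera's date command. A small helper (illustrative only, run on the host) makes the encoding explicit:<br />

```python
from datetime import datetime, timezone

def date_argument(when: datetime) -> str:
    """Encode a datetime in the MMDDhhmmCCYY.ss form used by the camera's date command."""
    return when.strftime("%m%d%H%M%Y.%S")

# The example from this page: Mon Jan 1 00:00:30 UTC 2001
example = datetime(2001, 1, 1, 0, 0, 30, tzinfo=timezone.utc)
print(date_argument(example))  # 010100002001.30
```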
<br />
= DNS configuration =<br />
<br />
If you use a computer name, such as <tt>'''pool.ntp.org'''</tt> when specifying the server in the <tt>'''ntp.conf'''</tt> file, then you need to make sure the camera's [[DNS Domain Name Services|'''DNS server configuration''']] will work in your network environment.<br />
<br />
[[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Capture_a_video_by_triggering_the_camera&diff=5698Capture a video by triggering the camera2024-02-10T18:29:40Z<p>Tfischer: /* Trigger timing */</p>
<hr />
<div>There are many ways to trigger the camera, depending on your physical layout and the precision required:<br />
<br />
* [[Web user interface | Web UI]]<br />
* [[Capture a video by triggering the camera#External_trigger_connector | external trigger]]<br />
* [[Capture a video by triggering the camera#Multifunction_button | multi-function button]]<br />
* [[Capture a video by triggering the camera#Multicast_network_trigger|multicast network trigger]]<br />
* [[Capture a video by triggering the camera#CAMAPI trigger()| CAMAPI]]<br />
<br />
= External trigger connector =<br />
<br />
The external trigger connector (marked I/O) accepts a 2.5mm wired remote shutter release cable or a daisychain genlock cable. The tip is trigger and the sleeve is ground. Any high quality Canon compatible remote trigger should work.<br />
<br />
== Trigger remotes ==<br />
<br />
Examples of compatible trigger remotes that we have tested include:<br />
<br />
* Canon RS-60E3<br />
* Vello RS-C1II<br />
* Pixel RS-201<br />
<br />
NOTE: These remote cables have a feature that allows the button to be locked down. In your excitement it is easy to engage the lock, leaving the button depressed when you think it is released. If the button is locked down, the camera will not operate as expected.<br />
<br />
=== Wireless trigger remote ===<br />
<br />
==== Low cost trigger ====<br />
<br />
[[File:Edgertronic-with-wireless-remote.jpg|right|300px]]<br />
<br />
I quickly tested one Canon compatible wireless trigger as a customer was asking for a recommendation. I picked the wireless trigger based on my previous experience that a unit that uses standard AAA batteries is easier to keep running. I also bought a mix of tripod mount screws so I could attach the receiver to the edgertronic camera.<br />
<br />
* [https://www.amazon.com/gp/product/B01CJ5TV8K Pixel 2.4GHz Digital Wireless Remote Trigger]<br />
* [https://www.amazon.com/gp/product/B07BKPXR72 1/4" male to 1/4" male camera screw adapter] allowing the wireless trigger receiver to be mounted to the camera using the right side camera tripod mount.<br />
* [https://www.amazon.com/AmazonBasics-Performance-Alkaline-Batteries-Count/dp/B00LH3DMUO 4 AAA batteries]<br />
<br />
As expected, the wireless remote worked fine. I set up the camera outside and walked down the block. I was able to trigger from around 60 meters away, without testing any larger distance. All of this was very simple testing, not any kind of reliability or suitability testing.<br />
<br />
What I liked about the Pixel 2.4GHz Digital Wireless Remote:<br />
<br />
* Easy to mount the remote to the camera (if you buy the extra screws listed above)<br />
* Uses standard batteries<br />
* Receiver blinks to indicate it is turned on<br />
* Both transmitter and receiver provide visual feedback that a trigger occurred<br />
* Nice form factor, easy to hold transmitter in your hand<br />
* Supports multiple codes so you can use several in close proximity without causing false triggers<br />
<br />
What could be better:<br />
<br />
* The slide switch on the transmitter, which allows selecting between single trigger, multi-trigger, and delayed trigger, is very flimsy. I would likely take a hot glue gun and fix the slider to the single trigger setting.<br />
* Transmitter doesn't have an on/off switch. When put away, if anything is pressing against the button, the batteries will go dead.<br />
<br />
==== Long range wireless trigger ====<br />
<br />
We haven't tried these personally, but a customer has and liked the performance.<br />
<br />
* [https://pocketwizard.com/radios PocketWizard remote wireless triggers]<br />
<br />
== External switch trigger ==<br />
<br />
An external switch trigger comes with the camera. It behaves much like the multi-function button, but the switch can be placed wherever it is convenient. For example, when I was taking a high speed video while welding, I held the stick welder handle in one hand and the external switch trigger in the other.<br />
<br />
== Contact closure trigger ==<br />
<br />
Any contact closure will do to trigger the camera via the external trigger jack. Many of our customers have semi-permanent installations with a pair of wires daisy chained to the cameras to trigger them all simultaneously with a contact closure.<br />
<br />
== Trigger timing ==<br />
<br />
For software versions 1.0 through 1.3, there is a 5 ms debounce built into the trigger input. In these releases, the delay from the trigger falling edge to the triggered frame is 5 ms plus one to three frames.<br />
<br />
For software version 2.0 and above, the trigger frame is the frame that occurs coincident with, or immediately after, the trigger input falling edge or trigger event.<br />
<br />
For software versions 1.4 through 2.2.0 there was no debounce built into the trigger input.<br />
<br />
For software versions 2.2.1 and above there is a user settable debounce setting. If debounce is on, a 5 ms debounce delay occurs between the falling trigger edge and when the camera processes the falling edge. With debounce off, there is no delay and the falling edge is processed immediately. Click on the wrench in the button box, select the Preferences tab, and turn trigger debounce on or off depending on your application.<br />
<br />
When triggered, the delay from the trigger input to the start of the next frame is recorded as the '''trigger_to_exposure_delay''' in the [[metadata file]]. It is the delay from the trigger signal, after the 5ms debounce delay if enabled, to the start of exposure of the first frame following the trigger. The trigger_to_exposure_delay can be thought of as the phase between the trigger and the stream of frames. The delay will always be less than 1/framerate.<br />
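Since trigger_to_exposure_delay is always less than one frame period (1/framerate), plus the optional 5 ms debounce, the worst case is easy to bound for a given frame rate. A small worked example (a convenience sketch, not camera software):<br />

```python
def max_trigger_delay_ms(framerate_fps: float, debounce: bool = False) -> float:
    """Upper bound, in milliseconds, on the delay from trigger to first exposure.

    The delay to the next frame is always under one frame period (1/framerate);
    if the 5 ms debounce is enabled (software 2.2.1 and above), it is added first.
    """
    debounce_ms = 5.0 if debounce else 0.0
    return debounce_ms + 1000.0 / framerate_fps

# At 500 fps the frame period is 2 ms, so the triggered frame starts
# within 2 ms of the trigger edge (7 ms with debounce enabled).
print(max_trigger_delay_ms(500))        # 2.0
print(max_trigger_delay_ms(500, True))  # 7.0
```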
<br />
== Daisychain genlock ==<br />
<br />
If you have two or more cameras, you can configure the cameras to use [[Genlock|genlock]]. Special cabling is required to interconnect the genlock source camera to the genlock receiver cameras. For the 2 camera case, you can use the genlock cable that is included with each camera purchase. Use the genlock cable to attach to the external trigger connector on each of the 2 cameras. You can also use an accessory product, the [[Genlock Adapter]] for longer cabling runs and/or supporting more than 2 genlocked cameras.<br />
<br />
== Console serial port ==<br />
<br />
The external trigger connector can also be used in the SDK developer mode for the Linux [[SDK - Serial console|serial console]] port.<br />
<br />
== Signaling ==<br />
Trigger signaling depends on the Genlock mode selected. You can read about it in [[Genlock#Signaling | Genlock Signaling]]<br />
<br />
== Event trigger ==<br />
<br />
I have a [http://www.triggertrap.com/ TriggerTrap]<span style="color:#dd38da">[1]</span> that I use with the edgertronic high speed camera. You can trigger based on an audio event, a change in video, motion, a time delay, and many other options. TriggerTrap is a cabling system that connects your smart phone to the camera. Sanstreak doesn't specifically support using TriggerTrap with an edgertronic camera, but I found it works great. I ordered the TriggerTrap MD3-E3 Mobile Dongle & E3 Remote Release Cable Kit for Canon. Unfortunately, TriggerTrap went out of business. Too bad; it was one of the best thought out camera trigger devices I have seen.<br />
<br />
== Canon compatible remote trigger ==<br />
<br />
The edgertronic camera supports 2.5mm remote trigger devices that are compatible with Canon cameras. I found around 200 hits when I searched Amazon for Canon camera remote triggers. Of course we haven't tested all of them (okay, we haven't tested any of them), but the advantage of supporting a standard like the Canon 2.5mm remote trigger jack is that they generally work fine.<br />
<br />
Another quick search for a long range wireless camera trigger turned up [http://www.pocketwizard.com/products/transmitter_receiver/plus%20iii/ PocketWizard], whose specs indicate a range of up to 500 meters. Call PocketWizard to make sure you get a remote camera trigger cable that is compatible with a Canon camera.<br />
<br />
== Genlock trigger ==<br />
<br />
At this time you can not use a physical external trigger (or switch closure) when the cameras are wired for genlock.<br />
<br />
= Multifunction button =<br />
<br />
You can simply power on a pre-configured camera and press the multi-function button to trigger the camera. The challenge is setting the focus and aperture correctly without a display. A typical use of the multi-function button would be to use a laptop or tablet for framing, focus, and lighting, then, while running the experiment, use the multi-function button to trigger the camera.<br><br />
More info at [[Multi-function button]].<br />
<br />
= Multicast network trigger =<br />
<br />
The camera also supports being configured to trigger when a multicast packet is received over the network connection. This is a good solution if you need several cameras to trigger at approximately the same time, but do not want to run wires between the cameras.<br><br />
More info at [[Multicast Network Trigger]]<br />
<br />
= CAMAPI trigger() =<br />
<br />
The Software Developer's Kit for the edgertronic camera, called CAMAPI, supports the trigger() API. CAMAPI is exposed over the network connection, and the camera ships with host examples (written in Python). You can easily access it via a web browser too:<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/trigger<br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
<br />
Each time you refresh your browser while pointing at that URL, the camera will trigger.<br />
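The same URL can be fetched from a script instead of a browser. A minimal Python sketch, assuming only that the camera is reachable at the IP address you substitute:<br />

```python
from urllib.request import urlopen

def trigger_url(camera_ip):
    """Build the CAMAPI trigger URL for a camera's IP address."""
    return "http://%s/trigger" % camera_ip

def trigger(camera_ip, timeout=5):
    """Request the trigger URL; returns the HTTP status code."""
    with urlopen(trigger_url(camera_ip), timeout=timeout) as response:
        return response.status

if __name__ == "__main__":
    trigger("10.11.12.13")  # replace with your camera's IP address
```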
<br />
You can use the full CAMAPI for complete camera control.<br />
<br />
If you need to trigger your camera from a long distance, you can use wireless Ethernet extenders or high quality WiFi access points. Then you can wirelessly trigger the camera using the built-in Web UI.<br />
<br />
<br />
<br />
<span style="color:#dd38da">[1]</span>The '''TriggerTrap''' company went out of business in January 2017.<br />
<br />
[[Category:Getting Started Guides]][[Category:Features]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=NTP_Network_Time_Protocol&diff=5697NTP Network Time Protocol2024-02-08T18:21:01Z<p>Tfischer: Camera now generates ntp.conf file so no need to provide an example</p>
<hr />
<div>Experimental feature<br />
<br />
= Camera Time =<br />
<br />
Normally the camera gets its time set via the web user interface. The host computer's current date and time is passed to the camera's battery powered hardware real time clock. After that, each time the camera powers on, the hardware real time is read to set the Linux operating system time.<br />
<br />
The camera time can also be set automatically, over the network, using NTP - Network Time Protocol. The NTP daemon, when configured, will regularly update the Linux wall clock. If the NTP server is located on the same local area network (subnet) and several cameras are used, the difference between the cameras' Linux wall clocks should be at most a few milliseconds.<br />
<br />
= NTP configuration =<br />
<br />
To enable NTP, you must provide the file [http://support.ntp.org/bin/view/Support/ConfiguringNTP '''ntp.conf'''] by saving the file in the root directory of either the SD card or a USB storage device. Power cycle the camera and the ntp.conf file will be stored in the camera's internal read-write file system (<tt>'''/mnt/rw/etc/ntp.conf'''</tt>). Power cycle the camera again and NTP will be enabled, using the configuration from the ntp.conf file. If you have several cameras to configure, create a <tt>keep-files</tt> file in the root directory of the SD card. Be sure to later delete the <tt>keep-files</tt> file if you use the SD card for storing videos.<br />
<br />
== Example ntp.conf file ==<br />
<br />
Change the server value to the NTP server of your choice. In the example below it is set to '''pool.ntp.org'''.<br />
<br />
<pre style="background:#d6e4f1"><br />
driftfile /mnt/rw/etc/ntp.drift<br />
statsdir /var/log/ntp_statistics<br />
<br />
# Specify one or more NTP servers.<br />
server pool.ntp.org<br />
</pre><br />
<br />
== Testing NTP configuration ==<br />
<br />
You can telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date 010100002001.30 ; hwclock -w<br />
</pre><br />
<br />
which will set the battery powered hardware real time clock to '''Mon Jan 1 00:00:30 UTC 2001'''. Power cycle the camera. Check the camera's date via the web interface or telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date<br />
</pre><br />
<br />
The current time and date should be shown. If not, check '''/var/log/messages''' on the camera, or view it via the web interface at<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/static/log/messages <br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
<br />
Power cycle the camera. When it reboots, verify the camera time is set correctly.<br />
<br />
= DNS configuration =<br />
<br />
If you use a computer name, such as <tt>'''pool.ntp.org'''</tt> when specifying the server in the <tt>'''ntp.conf'''</tt> file, then you need to make sure the camera's [[DNS Domain Name Services|'''DNS server configuration''']] will work in your network environment.<br />
<br />
[[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=NTP_Network_Time_Protocol&diff=5696NTP Network Time Protocol2024-02-08T18:18:56Z<p>Tfischer: force NTP drift file to read/write storage partition</p>
<hr />
<div>Experimental feature<br />
<br />
= Camera Time =<br />
<br />
Normally the camera gets its time set via the web user interface. The host computer's current date and time is passed to the camera's battery powered hardware real time clock. After that, each time the camera powers on, the hardware real time is read to set the Linux operating system time.<br />
<br />
The camera time can also be set automatically, over the network, using NTP - Network Time Protocol. The NTP daemon, when configured, will regularly update the Linux wall clock. If the NTP server is located on the same local area network (subnet) and several cameras are used, the difference between the cameras' Linux wall clocks should be at most a few milliseconds.<br />
<br />
= NTP configuration =<br />
<br />
To enable NTP, you must provide the file [http://support.ntp.org/bin/view/Support/ConfiguringNTP '''ntp.conf'''] by saving the file in the root directory of either the SD card or a USB storage device. Power cycle the camera and the ntp.conf file will be stored in the camera's internal read-write file system (<tt>'''/mnt/rw/etc/ntp.conf'''</tt>). Power cycle the camera again and NTP will be enabled, using the configuration from the ntp.conf file. If you have several cameras to configure, create a <tt>keep-files</tt> file in the root directory of the SD card. Be sure to later delete the <tt>keep-files</tt> file if you use the SD card for storing videos.<br />
<br />
== Example ntp.conf file ==<br />
<br />
Change the server value to the NTP server of your choice. In the example below it is set to '''pool.ntp.org'''.<br />
<br />
<pre style="background:#d6e4f1"><br />
driftfile /mnt/rw/etc/ntp.drift<br />
statsdir /var/log/ntp_statistics<br />
<br />
# Specify one or more NTP servers.<br />
server pool.ntp.org<br />
</pre><br />
<br />
You can find an example ntp.conf file in the camera's sdk directory tree at<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/static/sdk/ntp<br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
<br />
== Testing NTP configuration ==<br />
<br />
You can telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date 010100002001.30 ; hwclock -w<br />
</pre><br />
<br />
which will set the battery powered hardware real time clock to '''Mon Jan 1 00:00:30 UTC 2001'''. Power cycle the camera. Check the camera's date via the web interface or telnet into the camera and run<br />
<br />
<pre style="background:#d6e4f1"><br />
date<br />
</pre><br />
<br />
The current time and date should be shown. If not, check '''/var/log/messages''' on the camera, or view it via the web interface at<br />
<br />
<pre style="background:#d6e4f1"><br />
http://10.11.12.13/static/log/messages <br />
</pre><br />
replacing 10.11.12.13 with your camera's IP address as necessary.<br />
<br />
Power cycle the camera. When it reboots, verify the camera time is set correctly.<br />
<br />
= DNS configuration =<br />
<br />
If you use a computer name, such as <tt>'''pool.ntp.org'''</tt> when specifying the server in the <tt>'''ntp.conf'''</tt> file, then you need to make sure the camera's [[DNS Domain Name Services|'''DNS server configuration''']] will work in your network environment.<br />
<br />
[[Category:Networking]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Template:Genlock_Adapter&diff=5695Template:Genlock Adapter2024-02-02T01:06:41Z<p>Tfischer: /* Genlock Adapter */</p>
<hr />
<div>= Genlock Adapter =<br />
<br />
The Genlock Adapter allows you to capture frame synchronized videos where the camera are separated by more than 25 feet. In general, you need one Genlock Adapter per camera, unless you are routing the cabling through wiring closets, then you may need additional powered Genlock Adapters in each wiring closet.<br />
<br />
== Specs ==<br />
<br />
[[File:Genlock-adapter-front-source-receiver3.jpg|250px|thumb|Figure 1: Front panel]] [[File:Genlock-adapter-back-source-receiver3.jpg|250px|thumb|Figure 2: Back panel]]<br />
<br />
Dimensions (exclusive of cables) are 108.5mm x 87.5mm x 30.5mm<br />
<br />
== Ports ==<br />
<br />
The Genlock Adapter has the following ports:<br />
<br />
You must connect one, but not both of the following inputs:<br />
* IN - Genlock Input from an upstream Genlock Adapter. Connect using up to 1000' of Cat-5 or Cat-6 cable. '''This is NOT an Ethernet port. DO NOT connect to network hardware (POE Injectors, switches, routers etc.).'''<br />
* SOURCE IN - Genlock Input from genlock source camera. Set one camera to Genlock source mode and connect using the supplied 2.5mm male to male cable. As a special case, when a genlock source camera is connected to the SOURCE IN port, you can also connect a wired or wireless trigger to the TRIGGER IN port.<br />
<br />
One or more of the following Genlock outputs can be connected:<br />
* OUT1 - OUT4 - Genlock outputs to downstream Genlock Adapters. Connect using up to 1000' of Cat-5 or Cat-6 cable. This is NOT an Ethernet port. DO NOT connect to network hardware (POE Injectors, switches, routers etc.).<br />
* RECEIVER OUT / TRIGGER IN - Genlock Output to genlock receiver camera. Set the intended Receiver camera to Genlock Receiver mode and connect using the supplied 2.5mm male to male cable. As a special case, when a genlock source camera is connected to the SOURCE IN port, you can also connect a wired or wireless trigger to the TRIGGER IN port.<br />
<br />
The Genlock Adapter requires 5V @ 500mA<br />
* POWER - Connect one of the POWER ports to an edgertronic camera USB port, or an external USB charger, using the supplied USB male A to USB male A cable. You may use the second POWER port to daisy-chain power to a second Genlock Adapter box.<br />
<br />
== Wiring ==<br />
<br />
After you design your cabling layout, verify all of the following assertions are true:<br />
<br />
* Each camera has one Genlock Adapter<br />
* Each camera has one CAT5 cable used for Ethernet that goes to an Ethernet switch<br />
* Each genlock receiver camera has one CAT5 cable going from the camera's Genlock Adapter to another camera's Genlock Adapter<br />
* The genlock source camera has no more than four CAT5 cables coming from genlock receiver cameras' Genlock Adapters<br />
* Each Genlock Adapter that is not next to a camera has a USB power supply and a power source for the USB power supply<br />
* Maximum genlock CAT5 segment length is 1000 ft.<br />
* No network cable extenders are used for the genlock CAT5 wiring.<br />
<br />
=== Genlock source camera ===<br />
<br />
* Using a 2.5mm male/male cable, connect the IO port, on the back of the genlock source camera, to the SOURCE IN port on the front of the Genlock Adapter<br />
* Using a USB A to A cable, connect one of the USB ports, on the back of genlock source camera, to one of the POWER ports on the front of the Genlock Adapter<br />
* Using CAT5 or CAT6 cable(s), connect one or more of the OUT1-OUT4 ports, on the back of the Genlock Adapter, to the IN port(s) on the front of one or more downstream Genlock Adapters.<br />
* Using the supplied power adapter or POE splitter, connect 12V to the power jack on the back of the genlock source camera.<br />
* Connect ethernet to the back of the genlock source camera.<br />
* Optionally, connect a wired or wireless trigger to the TRIGGER IN port, on the front of the Genlock Adapter, to trigger the genlock source camera and all downstream receiver cameras.<br />
<br />
'''The Genlock adapter is NOT an ethernet network device. To prevent severe damage, DO NOT connect the Genlock Adapter to any ethernet network port (switches, routers, POE, servers, computers etc).'''<br />
<br />
=== Genlock receiver camera ===<br />
<br />
* Using a 2.5mm male/male cable, connect the IO port, on the back of the genlock receiver camera, to the RECEIVER OUT port on the front of the Genlock Adapter<br />
* Using a USB A to A cable, connect one of the USB ports, on the back of genlock receiver camera, to one of the POWER ports on the front of the Genlock Adapter<br />
* Using a CAT5 or CAT6 cable, connect the IN port, on the front of the Genlock Adapter, to one of the OUT1-OUT4 port(s) on the back of an upstream Genlock Adapter.<br />
* Optionally, using CAT5 or CAT6 cable(s), connect one or more of the OUT1-OUT4 ports, on the back of the Genlock Adapter, to the IN port(s) on the front of one or more downstream Genlock Adapters.<br />
* Using the supplied power adapter or POE splitter, connect 12V to the power jack on the back of the genlock receiver camera.<br />
* Connect ethernet to the back of the genlock receiver camera.<br />
<br />
'''The Genlock adapter is NOT an ethernet network device. To prevent severe damage, DO NOT connect the Genlock Adapter to any ethernet network port (switches, routers, POE, servers, computers etc).'''<br />
<br />
== Genlock Adapter connection examples ==<br />
<br />
The Genlock Adapter supports several different wiring deployments.<br />
<br />
The most important fact to keep in mind is not to confuse the cable – CAT5 – with the signaling protocol; ethernet signaling for camera control and genlock signaling for, well, genlock. CAT5 cabling with RJ45 connectors was chosen because it is inexpensive, well designed, and ubiquitous. However, using CAT5 cabling for two incompatible purposes can easily lead to confusion. CAT6 is a higher quality cable, and can be used in place of CAT5.<br />
<br />
The diagrams below are more readable if you click on them to expand their size.<br />
<br />
Diagram wire color list:<br />
<br />
{| border="1"<br />
! Color !! Example !! Usage<br />
|-<br />
| Medium Slate Blue<br />
| align="center" | <span style="background:#6666FF">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| Camera power, 12v 2.5A. Included with camera.<br />
|-<br />
| Laser Lemon<br />
| align="center" | <span style="background:#FFFF66">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| Ethernet network cable for camera configuration, monitoring and control. 2 meter cable included with camera.<br />
|-<br />
| Lime Green<br />
| align="center" | <span style="background:#99FF66">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| 2.5mm cable for connecting genlock source camera genlock signal to Genlock Adapter ''SOURCE IN'' port. 1 meter cable included with Genlock Adapter.<br />
|-<br />
| Violet<br />
| align="center" | <span style="background:#FF99FF">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| USB A to USB A cable for powering Genlock Adapter. 1 meter cable included with Genlock Adapter.<br />
|-<br />
| Scarlet Red<br />
| align="center" | <span style="background:#FF3300">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| CAT5 with RJ45 connectors for connecting two Genlock Adapters together. Maximum CAT5 length between two Genlock Adapters is 1.5 kilometers.<br />
|-<br />
| Dark Green<br />
| align="center" | <span style="background:#118842">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| 2.5mm cable for connecting Genlock Adapter ''RECEIVER OUT'' to genlock receiver camera. 1 meter cable included with Genlock Adapter.<br />
|}<br />
<br />
=== Genlock cable extender ===<br />
<br />
If you have two cameras more than 25 feet apart, you can extend the distance by using two Genlock Adapters, as shown in the diagram.<br />
<br />
[[File:Genlock-adapter-diagram-extender-topology.svg|400px]]<br />
<br />
If the cameras are more than 1000 ft apart, add an additional Genlock Adapter at least every 1000 ft. Power each Genlock Adapter, except the first and last, with a USB power supply.<br />
<br />
=== Star or home run topology ===<br />
<br />
If your cabling is going to follow typical small installation ethernet wiring, there will be one wiring closet that contains the ethernet network switch. This wiring topology is called star or, alternately, home run topology.<br />
<br />
[[File:Genlock-adapter-diagram-star-topology.svg|600px]]<br />
<br />
For example, if you have one genlock source camera and 3 genlock receiver cameras, the following equipment would be needed for star topology.<br />
<br />
* 4 cameras, one configured as genlock source camera and 3 cameras configured as genlock receiver.<br />
* 5 Genlock Adapters, one for each camera and one for the wiring closet. One USB power supply is needed for the Genlock Adapter in the closet.<br />
* 4 genlock CAT5 cables, going from the wiring closet to each camera.<br />
* 4 2.5mm cables that are included with the Genlock Adapter.<br />
* 5 USB-A to USB-A cables that are included with the Genlock Adapter.<br />
* 1 USB charger.<br />
* If you have a second wiring closet, with more cameras to connect, the pattern repeats. The figure shows a second wiring closet with an additional Genlock Adapter and USB charger.<br />
<br />
=== Daisy chain topology ===<br />
<br />
One Genlock Adapter per camera, each camera having 3 CAT5 cables (1 Ethernet, 2 genlock), except for the genlock source camera and the last receiver camera, which only need 2 CAT5 cables (1 Ethernet, 1 genlock).<br />
<br />
[[File:Genlock-adapter-diagram-daisy-chain-topology.svg|600px]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=File:Genlock-adapter-back-source-receiver3.jpg&diff=5694File:Genlock-adapter-back-source-receiver3.jpg2024-02-02T01:06:19Z<p>Tfischer: </p>
<hr />
<div></div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Template:Genlock_Adapter&diff=5693Template:Genlock Adapter2024-02-02T01:05:57Z<p>Tfischer: /* Genlock Adapter */</p>
<hr />
<div>= Genlock Adapter =<br />
<br />
The Genlock Adapter allows you to capture frame synchronized videos where the cameras are separated by more than 25 feet. In general, you need one Genlock Adapter per camera; if you are routing the cabling through wiring closets, you may also need additional powered Genlock Adapters in each wiring closet.<br />
<br />
== Specs ==<br />
<br />
[[File:Genlock-adapter-front-source-receiver3.jpg|250px|thumb|Figure 1: Front panel]] [[File:Genlock-adapter-blue-rear-view-clean.png|250px|thumb|Figure 2: Back panel]]<br />
<br />
Dimensions (exclusive of cables) are 108.5mm x 87.5mm x 30.5mm<br />
<br />
== Ports ==<br />
<br />
The Genlock Adapter has the following ports:<br />
<br />
You must connect one, but not both of the following inputs:<br />
* IN - Genlock Input from an upstream Genlock Adapter. Connect using up to 1000' of Cat-5 or Cat-6 cable. '''This is NOT an Ethernet port. DO NOT connect to network hardware (POE Injectors, switches, routers etc.).'''<br />
* SOURCE IN - Genlock Input from genlock source camera. Set one camera to Genlock source mode and connect using the supplied 2.5mm male to male cable. As a special case, when a genlock source camera is connected to the SOURCE IN port, you can also connect a wired or wireless trigger to the TRIGGER IN port.<br />
<br />
One or more of the following Genlock outputs can be connected:<br />
* OUT1 - OUT4 - Genlock outputs to downstream Genlock Adapters. Connect using up to 1000' of Cat-5 or Cat-6 cable. This is NOT an Ethernet port. DO NOT connect to network hardware (POE Injectors, switches, routers etc.).<br />
* RECEIVER OUT / TRIGGER IN - Genlock Output to genlock receiver camera. Set the intended Receiver camera to Genlock Receiver mode and connect using the supplied 2.5mm male to male cable. As a special case, when a genlock source camera is connected to the SOURCE IN port, you can also connect a wired or wireless trigger to the TRIGGER IN port.<br />
<br />
The Genlock Adapter requires 5V @ 500mA<br />
* POWER - Connect one of the POWER ports to an edgertronic camera USB port, or an external USB charger, using the supplied USB male A to USB male A cable. You may use the second POWER port to daisy-chain power to a second Genlock Adapter box.<br />
<br />
== Wiring ==<br />
<br />
After you design your cabling layout, verify all of the following assertions are true:<br />
<br />
* Each camera has one Genlock Adapter<br />
* Each camera has one CAT5 cable used for Ethernet that goes to an Ethernet switch<br />
* Each genlock receiver camera has one CAT5 cable going from the camera's Genlock Adapter to another camera's Genlock Adapter<br />
* The genlock source camera has no more than four CAT5 cables coming from genlock receiver cameras' Genlock Adapters<br />
* Each Genlock Adapter that is not next to a camera has a USB power supply and a power source for the USB power supply<br />
* Maximum genlock CAT5 segment length is 1000 ft.<br />
* No network cable extenders are used for the genlock CAT5 wiring.<br />
<br />
=== Genlock source camera ===<br />
<br />
* Using a 2.5mm male/male cable, connect the IO port, on the back of the genlock source camera, to the SOURCE IN port on the front of the Genlock Adapter<br />
* Using a USB A to A cable, connect one of the USB ports, on the back of genlock source camera, to one of the POWER ports on the front of the Genlock Adapter<br />
* Using CAT5 or CAT6 cable(s), connect one or more of the OUT1-OUT4 ports, on the back of the Genlock Adapter, to the IN port(s) on the front of one or more downstream Genlock Adapters.<br />
* Using the supplied power adapter or POE splitter, connect 12V to the power jack on the back of the genlock source camera.<br />
* Connect ethernet to the back of the genlock source camera.<br />
* Optionally, connect a wired or wireless trigger to the TRIGGER IN port, on the front of the Genlock Adapter, to trigger the genlock source camera and all downstream receiver cameras.<br />
<br />
'''The Genlock adapter is NOT an ethernet network device. To prevent severe damage, DO NOT connect the Genlock Adapter to any ethernet network port (switches, routers, POE, servers, computers etc).'''<br />
<br />
=== Genlock receiver camera ===<br />
<br />
* Using a 2.5mm male/male cable, connect the IO port, on the back of the genlock receiver camera, to the RECEIVER OUT port on the front of the Genlock Adapter<br />
* Using a USB A to A cable, connect one of the USB ports, on the back of genlock receiver camera, to one of the POWER ports on the front of the Genlock Adapter<br />
* Using a CAT5 or CAT6 cable, connect the IN port, on the front of the Genlock Adapter, to one of the OUT1-OUT4 port(s) on the back of an upstream Genlock Adapter.<br />
* Optionally, using CAT5 or CAT6 cable(s), connect one or more of the OUT1-OUT4 ports, on the back of the Genlock Adapter, to the IN port(s) on the front of one or more downstream Genlock Adapters.<br />
* Using the supplied power adapter or POE splitter, connect 12V to the power jack on the back of the genlock receiver camera.<br />
* Connect ethernet to the back of the genlock receiver camera.<br />
<br />
'''The Genlock adapter is NOT an ethernet network device. To prevent severe damage, DO NOT connect the Genlock Adapter to any ethernet network port (switches, routers, POE, servers, computers etc).'''<br />
<br />
== Genlock Adapter connection examples ==<br />
<br />
The Genlock Adapter supports several different wiring deployments.<br />
<br />
The most important fact to keep in mind is not to confuse the cable – CAT5 – with the signaling protocol; ethernet signaling for camera control and genlock signaling for, well, genlock. CAT5 cabling with RJ45 connectors was chosen because it is inexpensive, well designed, and ubiquitous. However, using CAT5 cabling for two incompatible purposes can easily lead to confusion. CAT6 is a higher quality cable, and can be used in place of CAT5.<br />
<br />
The diagrams below are more readable if you click on them to expand their size.<br />
<br />
Diagram wire color list:<br />
<br />
{| border="1"<br />
! Color !! Example !! Usage<br />
|-<br />
| Medium Slate Blue<br />
| align="center" | <span style="background:#6666FF">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| Camera power, 12v 2.5A. Included with camera.<br />
|-<br />
| Laser Lemon<br />
| align="center" | <span style="background:#FFFF66">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| Ethernet network cable for camera configuration, monitoring and control. 2 meter cable included with camera.<br />
|-<br />
| Lime Green<br />
| align="center" | <span style="background:#99FF66">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| 2.5mm cable for connecting genlock source camera genlock signal to Genlock Adapter ''SOURCE IN'' port. 1 meter cable included with Genlock Adapter.<br />
|-<br />
| Violet<br />
| align="center" | <span style="background:#FF99FF">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| USB A to USB A cable for powering Genlock Adapter. 1 meter cable included with Genlock Adapter.<br />
|-<br />
| Scarlet Red<br />
| align="center" | <span style="background:#FF3300">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| CAT5 with RJ45 connectors for connecting two Genlock Adapters together. Maximum CAT5 length between two Genlock Adapters is 1.5 kilometers.<br />
|-<br />
| Dark Green<br />
| align="center" | <span style="background:#118842">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><br />
| 2.5mm cable for connecting Genlock Adapter ''RECEIVER OUT'' to genlock receiver camera. 1 meter cable included with Genlock Adapter.<br />
|}<br />
<br />
=== Genlock cable extender ===<br />
<br />
If you have two cameras more than 25 feet apart, you can extend the distance by using two Genlock Adapters, as shown in the diagram.<br />
<br />
[[File:Genlock-adapter-diagram-extender-topology.svg|400px]]<br />
<br />
If the cameras are more than 1000 ft apart, add an additional Genlock Adapter at least every 1000 ft. Power each Genlock Adapter, except the first and last, with a USB power supply.<br />
<br />
=== Star or home run topology ===<br />
<br />
If your cabling is going to follow typical small installation ethernet wiring, there will be one wiring closet that contains the ethernet network switch. This wiring topology is called star or, alternately, home run topology.<br />
<br />
[[File:Genlock-adapter-diagram-star-topology.svg|600px]]<br />
<br />
For example, if you have one genlock source camera and 3 genlock receiver cameras, the following equipment would be needed for star topology.<br />
<br />
* 4 cameras, one configured as genlock source camera and 3 cameras configured as genlock receiver.<br />
* 5 Genlock Adapters, one for each camera and one for the wiring closet. One USB power supply is needed for the Genlock Adapter in the closet.<br />
* 4 genlock CAT5 cables, going from the wiring closet to each camera.<br />
* 4 2.5mm cables that are included with the Genlock Adapter.<br />
* 5 USB-A to USB-A cables that are included with the Genlock Adapter.<br />
* 1 USB charger.<br />
* If you have a second wiring closet, with more cameras to connect, the pattern repeats. The figure shows a second wiring closet with an additional Genlock Adapter and USB charger.<br />
<br />
=== Daisy chain topology ===<br />
<br />
One Genlock Adapter per camera, each camera having 3 CAT5 cables (1 Ethernet, 2 genlock), except for the genlock source camera and the last receiver camera, which only need 2 CAT5 cables (1 Ethernet, 1 genlock).<br />
<br />
[[File:Genlock-adapter-diagram-daisy-chain-topology.svg|600px]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=File:Genlock-adapter-front-source-receiver3.jpg&diff=5692File:Genlock-adapter-front-source-receiver3.jpg2024-02-02T01:04:29Z<p>Tfischer: </p>
<hr />
<div></div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Trigger_Latency_Compensation&diff=5691Trigger Latency Compensation2024-01-26T20:27:32Z<p>Tfischer: /* Latency compensation */</p>
<hr />
<div>There is always a latency, a time delay, between when an event occurs and when the event is detected, and additional latency until the camera is triggered to capture the event. For some environments, the latency is fixed for all captures. For other environments, the latency changes from one capture to the next.<br />
<br />
Trigger latency example: if you are using radar to trigger the camera, there is a delay in measuring the speed of an object, such as a baseball, before the measured speed can be compared against the threshold that triggers the camera. <br />
<br />
= Trigger event =<br />
<br />
The camera uses the trigger event for three purposes:<br />
<br />
# Switching from capturing pre-trigger frames to capturing post-trigger frames.<br />
# Assigning frame 0 to the trigger frame, and numbering the other captured frames accordingly.<br />
# Capturing a time-of-day trigger time to store in the metadata file.<br />
<br />
= Latency compensation limitations =<br />
<br />
The following cannot be modified as part of dynamic latency compensation:<br />
<br />
* The size of the capture buffer reserved for a capture,<br />
* The number of frames in the pre-trigger portion of the buffer, and <br />
* The point in time the trigger was received by the camera.<br />
<br />
What can be modified is which captured frame is considered the trigger frame (frame 0), and the time-of-day the trigger occurred.<br />
<br />
The obvious limitation in latency compensation is that the pre-trigger portion of the capture buffer needs to be sized for the desired pre-trigger time plus the worst-case (longest) latency time. This is necessary because the capture buffer size, and the portion reserved for pre-trigger frames, cannot be changed after frame capture has started.<br />
<br />
= Specifying latency =<br />
<br />
There are 3 possible ways to specify latency:<br />
<br />
# If the latency is fixed and measurable, it might be convenient to think of latency in terms of a time difference.<br />
# If the latency is dynamic, then it might be convenient to capture the actual time the event occurred. Latency is the difference between camera trigger time and the event time.<br />
# If you have a warped viewpoint, such as the camera software engineer writing this wiki, you could think of latency in terms of a frame count. This of course is just the latency time multiplied by the frame rate. The frame count is used to move which captured frame is considered the trigger frame, frame 0.<br />
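The frame-count view of latency is a one-line conversion. A sketch, using an assumed frame rate purely for illustration:<br />

```python
def latency_frames(latency_seconds, frame_rate):
    """Number of captured frames spanned by the trigger latency."""
    return round(latency_seconds * frame_rate)

# 700 ms of trigger latency at an assumed 500 frames per second:
print(latency_frames(0.700, 500))  # 350
```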
<br />
= Latency compensation =<br />
<br />
In all approaches to latency compensation, the pre-trigger portion of the capture buffer needs to be sized for the desired pre-trigger time plus the worst-case (longest) latency time.<br />
<br />
== Post video save latency compensation ==<br />
<br />
Once the event is over and all the data has been captured, latency compensation is done <br />
<br />
<br />
== Post capture latency compensation ==<br />
<br />
= Old version of the page =<br />
<br />
<br />
== Background ==<br />
<br />
You can compensate for a latency (delay) between when the event of interest occurs and a later point in time when the camera is triggered, if you know the length of the delay. <br />
<br />
Since the edgertronic camera supports a pretrigger buffer capturing frames before the trigger occurs, the camera creates a video of all the event details. However, a side effect of trigger latency is<br />
<br />
<br />
== Terminology ==<br />
<br />
* Event - the time during the action that indicates the action is of interest and should be captured<br />
* Trigger - the time at which the camera is triggered<br />
* Latency - the delay between the event and the trigger<br />
<br />
== Example ==<br />
<br />
Assume you trigger using a radar that has 700 ms of latency. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want to capture, there will be 600 ms of video after the duration of interest.<br />
<br />
<br />
This feature has not been implemented. We added [[Captured video queue control|manual save mode]] to accomplish the same functionality.<br />
<br />
<br />
<br />
Trigger latency compensation allows you to discard (trim) frames from the capture so you are not forced to save a portion of video that is always uninteresting.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Edgertronic_multi-camera_synchronization_genlock&diff=5684Edgertronic multi-camera synchronization genlock2024-01-26T19:19:53Z<p>Tfischer: /* File naming */</p>
<hr />
<div>Note: Genlock terminology changed in software release v2.5.2. The change was done in a way that is backwards compatible for controlling applications that use CAMAPI.<br />
<br />
= Overview =<br />
<br />
[[File:Genlock circled.png|400px|thumb|right|Genlock settings]]<br />
<br />
Genlock is a mode where a source camera provides frame start and trigger signals to one or more receiver cameras. The source and receiver cameras capture frames at the exact same rate, with the start of exposure synchronized to within +/- 1 µs of each other. Trigger events are processed by the source and distributed to the receivers in a manner that causes the source and receiver cameras to trigger on the exact same frame.<br />
<br />
For each frame that is captured, the cameras go through three phases: exposure, frame data readout, and idle. As long as the receiver camera has a positive idle time, the receiver will stay in genlock with the source. During the exposure and frame data readout phases, a receiver camera will ignore start-of-exposure signals from the source.<br />
<br />
There are limitations:<br />
<br />
* All cameras have to be configured for compatible timing (see below). For one-to-one frame lock, each receiver has to be set to a frame rate that is greater than or equal to the source's frame rate.<br />
* The user is responsible for making sure all cameras can be triggered before issuing a trigger. There is no automated check that verifies whether a receiver camera has completed saving the previously captured video, finished the calibrate operation, has available storage, etc.<br />
<br />
'''All cameras must have identical settings except for Genlock Mode, Sensitivity and Shutter'''. The Shutter value must not limit frame rate. You are responsible for configuring each camera; there is no configuration information flowing over the genlock cable. Incorrect camera configuration will lead to unusable videos.<br />
<br />
Should you wish to violate the above and use different settings, you must always guarantee the following:<br />
# Whatever the settings (exposure, resolution, etc.), the receiver must be able to keep up with the source. Just because you set the source to 1000 fps doesn't magically make an SC1 set to 1280x1024 able to run faster than 494 fps.<br />
# The receiver must not be set for more post-trigger frames than the source. The source camera only sends genlock timing pulses while it is filling the pre- and post-trigger buffers. When the source post-trigger buffer is filled, the source will stop sending pulses until it is ready for the next capture. Until then, if a genlock receiver still expects to capture more post-trigger frames, it will lose genlock sync.<br />
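The two guarantees above can be checked programmatically before arming the cameras. A sketch follows; the dictionary keys are assumptions for illustration, not CAMAPI fields:<br />

```python
def receiver_settings_ok(source, receiver):
    """Check the two genlock guarantees described above.
    Each argument is a dict with assumed keys; 'max_frame_rate' is the
    fastest rate the receiver can sustain at its configured resolution."""
    # 1. The receiver must be able to keep up with the source's frame rate.
    rate_ok = receiver["max_frame_rate"] >= source["frame_rate"]
    # 2. The receiver must not expect more post-trigger frames than the source.
    frames_ok = receiver["post_trigger_frames"] <= source["post_trigger_frames"]
    return rate_ok and frames_ok

# An SC1 at 1280x1024 tops out at 494 fps, so a 1000 fps source fails check 1.
ok = receiver_settings_ok(
    {"frame_rate": 1000, "post_trigger_frames": 500},
    {"max_frame_rate": 494, "post_trigger_frames": 500},
)
```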
<br />
= Cabling =<br />
<br />
[[File:Edgertronic-2-cameras-with-genlock-cable.jpeg|300px|right]]<br />
<br />
The frame timing and trigger are sent on a single wire in the IO (external trigger) connector. In the simplest setup, with one source and one receiver camera, all that is needed is a 3 conductor, 2.5mm male to 2.5mm male, patch cord cable to connect the cameras. The cable was included with the camera. If you need another [https://www.amazon.com/TNP-2-5mm-Audio-Cable-6FT/dp/B0742MMXVW 3 conductor 2.5mm male to 2.5mm male patch cord cable] they are commonly available.<br />
<br />
You can also use an accessory product, the [[Genlock Adapter]] for longer cabling runs and/or supporting more than 2 genlocked cameras.<br />
<br />
= Initial configuration =<br />
<br />
The cameras need to have unique IP addresses. If you are not using DHCP, you can buy a small network switch (such as a [https://www.amazon.com/D-Link-Unmanaged-Metal-Desktop-Switch/dp/B007DHWPR4 D-Link DES-105 5 Port 10/100 Network Switch] or the [https://www.amazon.com/D-Link-Gigabit-Unmanaged-Desktop-DGS-105/dp/B000BC7QMM D-Link DGS-105 5 Port Gigabit Network Switch]). For initial configuration, plug the PC and one receiver camera into the network switch. Leave the PC configured for network IP address 10.11.12.1 and the genlock source camera configured for network IP address 10.11.12.13. Each receiver camera needs a different, unique IP address, such as 10.11.12.14, 10.11.12.15, etc. Refer to the [[Ethernet networking|network setup]] instructions to see how to set each receiver camera's IP address. Remember you can only have one receiver connected at a time when setting the camera's IP address. Once all receiver cameras have been configured, plug all cameras into the network switch. There is no need to power off the cameras.<br />
<br />
Browse to the camera you chose as the genlock source; from the above instructions it will be at http://10.11.12.13. Configure the camera settings with genlock configured as '''source'''. Then browse to each receiver camera and configure each one with identical settings, except of course set genlock to '''receiver'''. Once configured and properly cabled, look at the LEDs on all cameras to verify all cameras are in the run state (solid green camera LED). A blinking red camera LED indicates that the receiver camera isn't receiving a genlock signal.<br />
<br />
Once the cameras are wired, configured, and triggerable, go ahead and trigger the source camera to verify the receiver cameras are responding to the trigger. During capture, look at the camera LED on each camera and verify none of them are blinking white. A blinking red/white camera LED indicates that the camera was not able to maintain genlock. Check each captured video to verify the results are what you expected.<br />
<br />
= Customize camera settings =<br />
<br />
Browse to each receiver camera and adjust the settings. There are setting limitations:<br />
<br />
* For one-to-one frame lock, each receiver has to be set to a frame rate greater than or equal to the source's frame rate.<br />
<br />
* The number of post-trigger frames (Frame_rate * Duration * (1 - Pretrigger_percentage/100)) on a receiver must not be greater than the source's post-trigger frames. Once the source has filled its post-trigger buffer, it will momentarily stop sending genlock signals, and this will make the receivers unhappy if they still expect to capture more post-trigger frames.<br />
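The post-trigger frame count in the formula above can be computed as follows (a sketch; the parameter names are assumptions):<br />

```python
def post_trigger_frames(frame_rate_fps, duration_s, pretrigger_pct):
    """Post-trigger frames = Frame_rate * Duration * (1 - Pretrigger_percentage/100)."""
    return int(frame_rate_fps * duration_s * (1 - pretrigger_pct / 100))

# Source: 500 fps, 4 s capture buffer, 50% pre-trigger -> 1000 post-trigger frames.
source_frames = post_trigger_frames(500, 4, 50)
# A receiver with a 2 s buffer at the same rate expects 500, which is safe.
receiver_frames = post_trigger_frames(500, 2, 50)
```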
<br />
Once all the camera settings are adjusted, do another trial video capture to verify the timing is compatible. The [[metadata file]] will indicate if each receiver camera was able to maintain genlock.<br />
<br />
Additional technical details, if you are trying different settings on different cameras:<br />
<br />
* The post-trigger duration of the source camera has to be at least as long as the longest post-trigger duration among the receiver cameras.<br />
* Frame rate must be the same on all cameras<br />
* Resolution can be different<br />
<br />
= Metadata file =<br />
<br />
There are two genlock related settings in each captured video [[metadata file]].<br />
<br />
{| class="wikitable"<br />
! Key !! Value !! Meaning<br />
|-<br />
| Genlock || Off<br>Source<br>Receiver<br>External || Camera genlock setting.<br />
|-<br />
| Genlock locked || True<br>False<br>None || Indicates if the receiver camera was able to maintain genlock throughout the video capture. Set to ''None'' if the camera is not configured as a genlock receiver.<br />
|}<br />
<br />
There are several metadata file keys whose meaning can be affected when genlock is enabled:<br />
<br />
{| class="wikitable"<br />
! Key !! Meaning<br />
|-<br />
| Frame rate || The source is slowed down slightly to make sure it doesn't over-run the receiver. All that is required is that the source period be greater than or equal to the minimum period the receiver can run at, given the receiver's allowed settings. This includes a small fudge factor for clock frequency error and jitter due to cabling.<br />
|-<br />
| Frame count || The receiver may capture more frames than the source, even with identical settings, because of the frame rate adjustment that ensures both cameras stay synchronized.<br />
|-<br />
| Trigger delay || For the receiver camera, trigger delay is meaningless. Trigger delay is reported correctly when the camera is configured for external genlock.<br />
|}<br />
<br />
= Camera Settings =<br />
<br />
The genlock setting is stored on the camera so the value is used the next time you power on the camera. The value is also available in the metadata file. The possible Genlock values are described below:<br />
<br />
{| class="wikitable"<br />
! Key !! Value !! Meaning<br />
|-<br />
| rowspan="4" | Genlock || Off || Camera will respond to trigger events as normal and generate its own start-of-exposure timing signal.<br />
|-<br />
| Source || Camera will provide both the genlock trigger and genlock start-of-exposure signals on the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector | External trigger connector]].<br />
|-<br />
| Receiver || Camera will get trigger and start-of-exposure from the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector | External trigger connector]].<br />
|-<br />
| External || An external timing source is providing the start-of-exposure signal. The external trigger signal can be used to trigger the camera when configured for external genlock.<br />
|}<br />
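A controlling application that persists or round-trips this setting can guard against typos with a trivial check. This helper is illustrative only; confirm the accepted strings against the CAMAPI documentation.<br />

```python
# The four documented genlock values.
GENLOCK_VALUES = ('Off', 'Source', 'Receiver', 'External')

def normalize_genlock(value):
    """Return the canonical genlock value, or raise on an unknown one."""
    for v in GENLOCK_VALUES:
        if v.lower() == str(value).strip().lower():
            return v
    raise ValueError(f"unknown genlock value: {value!r}")
```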
<br />
= Genlock status reporting =<br />
<br />
Only the genlock receiver will report genlock status information, specifically genlock receiver timing error. If the receiver camera is in genlock, then normal camera status information is provided. Genlock status reporting is provided via the LEDs and CAMAPI <tt>get_camstatus()</tt> API.<br />
<br />
== LEDs ==<br />
<br />
When a receiver camera is unable to maintain genlock the camera LED will blink red/white. Once the receiver camera is maintaining genlock, the camera LED will stop blinking white after a five second timeout. Entering the calibrating or saving state will clear the blinking white camera LED.<br />
<br />
If the receiver camera doesn't detect any genlock signal, the camera LED will blink red.<br />
<br />
== Automation using CAMAPI ==<br />
<br />
The original design concept for genlock was the same as for the camera itself: capturing critical videos to gain insight into the physical world. Video capture was expected to happen through the web user interface and a simple trigger button, which is all that is needed to manually capture a handful of videos. Once the edgertronic camera was integrated into more complex workflows, the use model changed, and additional aspects of the camera's overall operation needed to be taken into account.<br />
<br />
Although the cameras are in lock-step synchronization for triggering and capturing each frame, there are other aspects of the cameras that are operating asynchronously.<br />
<br />
* Local storage - an SD card may be full on one camera, but not on another camera.<br />
* SD card performance is difficult to predict. Even two SD cards from the same packaging, bought at the same time, might have noticeably different performance characteristics. A new SD card tends to be noticeably faster until all the underlying NAND storage has been used once; after that, the needed erase cycles cause the performance to drop.<br />
* The time it takes a camera to finish the current capture and start putting pre-trigger frames in the next buffer can also be affected by the SD card performance.<br />
<br />
If the receiver camera is experiencing a genlock timing error, the CAMAPI <tt>get_camstatus()</tt> API returns a dictionary containing the '''flags''' keyword with the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag (0x40000) bit set. Once the receiver camera is maintaining genlock, the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag is cleared after a five second timeout. In addition, entering the calibrating or saving state will clear the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag.<br />
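For example, a polling loop could decode the documented flag bit from the <tt>get_camstatus()</tt> result (only the '''flags''' key is assumed here; the rest of the dictionary's shape is not shown):<br />

```python
CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR = 0x40000  # bit value documented above

def genlock_error(camstatus):
    """Check the genlock timing-error bit in a get_camstatus() result dict."""
    return bool(camstatus.get('flags', 0) & CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR)
```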
<br />
= Feature interaction =<br />
<br />
* If camera [[SDK - Serial console|Serial console]] is enabled, genlock setting is forced to ''Off''.<br />
* If the camera is configured as a genlock receiver, the camera will ignore all local trigger events (web UI trigger button, multi-function button, CAMAPI <tt>trigger()</tt> invocation, etc.), expecting instead a genlock trigger over the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector|External trigger connector]].<br />
* Each camera generates its own timing for the calibration cycle.<br />
* When configured as a receiver camera, the camera falls back to its own frame rate setting only if the genlock start-of-exposure signal is not detected for 100 ms.<br />
<br />
= False Triggers =<br />
<br />
* Plugging in the genlock cable may trigger both source and receiver cameras.<br />
* Unplugging the genlock cable may trigger both source and receiver cameras.<br />
* Powering off a genlocked camera may trigger any other connected cameras.<br />
<br />
= Metadata file =<br />
<br />
The metadata file on the source camera will contain an entry indicating the delay between the incoming trigger and the start of the first frame following the trigger. The trigger delay value in the metadata file created by the receiver cameras is meaningless.<br />
<br />
= File naming =<br />
<br />
There are several CAMAPI mechanisms (<tt>trigger()</tt>, <tt>rename_last_video()</tt>, <tt>selective_save()</tt>) that allow you to name the file when you trigger the camera. How should file naming be handled for a group of genlocked cameras? <br />
<br />
Example: A data acquisition system includes multiple genlocked cameras to capture an event, such as a baseball pitch. The data acquisition system creates a unique GUID identifier for each event. The GUID needs to be included in each camera's filename to enable post-event data correlation.<br />
<br />
The recommended design is to separate the camera trigger from assigning the filename. Under most conditions, send the CAMAPI <tt>rename_last_video()</tt> request, with <tt>stage</tt> set to <tt>RENAME_STAGE_TRIGGERED</tt>, to each of the cameras in the genlock group right after triggering them.<br />
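A minimal sketch of the naming step, assuming a hypothetical ''guid_camera'' naming scheme (the scheme and camera IDs are illustrative; each generated name would be handed to that camera's <tt>rename_last_video()</tt> call):<br />

```python
import uuid

def event_video_names(camera_ids, event_guid=None):
    """Build one filename per camera embedding the event GUID.

    The '<guid>_<camera>' scheme is an assumption for illustration;
    pick whatever layout your post-event correlation tooling expects.
    """
    guid = event_guid or str(uuid.uuid4())
    return {cam: f"{guid}_{cam}" for cam in camera_ids}
```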
<br />
= Technical description =<br />
<br />
* For genlock to function properly, the receiver camera(s) must support a frame rate greater than or equal to the source camera's output frame rate. The easiest way to ensure this is to use identical settings on all cameras. The source camera will decrease its maximum allowed frame rate slightly to allow for the slight clock differences that are possible between the source and receiver cameras.<br />
<br />
= Signaling =<br />
<br />
The camera can be triggered from the following sources:<br />
<br />
* UI / API - trigger via CAMAPI <tt>trigger()</tt> API.<br />
* [[Multi-function button]] on back of camera<br />
* Tip of 2.5mm phone jack ([[Trigger#External_trigger_connector|I/O connector]])<br />
* If using a [[Genlock Adapter]] - tip of the 2.5mm phone jack labeled '''TRIGGER IN'''.<br />
<br />
The UI / CAMAPI <tt>trigger()</tt> API, Multi-function button, and Tip of phone jack are logically or-ed together and only the source camera responds to the trigger event.<br />
<br />
{| class="wikitable"<br />
! 2.5mm phone<br>jack signal !! Genlock<br>OFF mode !! Genlock<br>SOURCE mode !! Genlock<br>RECEIVER mode !! Genlock<br>EXTERNAL mode<br />
|-<br />
| align="center" | tip || align="center" | trigger input signal<br>(3.3V LVCMOS, active low) || align="center" | trigger input signal<br>(3.3V LVCMOS, active low) || align="center" | unused || align="center" | trigger input signal<br>(3.3V LVCMOS, active low)<br />
|-<br />
| align="center" | ring || align="center" | unused || align="center" | genlock output signal to receiver camera(s)<br>(combined frame start and trigger) || align="center" | frame start input signal<br>(3.3V LVCMOS, active low) || align="center" | frame start input signal<br>(3.3V LVCMOS, active low)<br />
|-<br />
| align="center" | sleeve || colspan="4" align="center" | ground<br />
|}<br />
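The table above can be restated as data, which is handy when a controlling application needs to reason about cabling (a sketch; the role strings are shorthand for the table entries):<br />

```python
# Role of each 2.5mm jack pin per genlock mode, from the signaling table.
PIN_ROLES = {
    'Off':      {'tip': 'trigger input', 'ring': 'unused'},
    'Source':   {'tip': 'trigger input', 'ring': 'genlock output'},
    'Receiver': {'tip': 'unused',        'ring': 'frame start input'},
    'External': {'tip': 'trigger input', 'ring': 'frame start input'},
}

def pin_role(mode, pin):
    """Look up a jack pin's role in a genlock mode; sleeve is always ground."""
    if pin == 'sleeve':
        return 'ground'
    return PIN_ROLES[mode][pin]
```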
<br />
You can use either cable listed below to genlock two cameras (note that this cabling does not support an external trigger via a wired remote).<br />
<br />
* Steren 252-612 12' 2.5mm Male to 2.5mm Male<br />
* Generic [https://www.amazon.com/TNP-2-5mm-Audio-Cable-6FT/dp/B0742MMXVW 3 conductor 2.5mm male to 2.5mm male patch cord cable]<br />
<br />
We recommend using [[Genlock Adapter|Genlock Adapters]] if you are going to genlock more than 2 cameras. If you are good with a soldering iron, you can instead make your own custom cable; a custom cable is needed to genlock 3 to 5 cameras directly, to use a switch closure to trigger the source, or to add an external trigger. The signalling from the source genlock camera can drive up to 4 receiver genlock cameras. If you need to connect more than 4 receiver genlock cameras, you will need to increase the drive by using an active cable with a 3.3V LVCMOS buffer. You can get 5V from the USB port, but you will need to step it down to 3.3V before powering the LVCMOS buffer. If this doesn't make sense to you, please use [[Genlock Adapter|Genlock Adapters]] to drive more than 4 receiver genlock cameras.<br />
<br />
<span style="color:purple"><br />
'''If you are making your own cable, note that the wired trigger we supply will ground both the tip and ring when the button is pressed. Connect the trigger's tip and sleeve to the source camera tip and sleeve respectively, and DO NOT connect the trigger ring to either camera. Look at the table above and you'll understand why.'''<br />
</span> <br><br />
<br />
<span style="color:purple"><br />
'''If you are having problems with your cabling, test genlock using the supplied 12' 2.5mm male-male genlock cable between the source camera and one receiver camera. Set up the two cameras in the UI, and then trigger the source from the UI. Both cameras should be synced and trigger at the same instant. If you are having problems only when using your custom cabling, please check your wiring.'''<br />
<br />
</span><br />
<br />
Electrically, the tip and ring are identical circuits, with the exception that the tip is always an input, while the ring is an output on the genlock source camera and an input on all genlock receiver cameras. Each signal has a 4.7K pullup to 3.3V, followed by a 165 Ohm series resistor. The other end of the 165 Ohm resistor connects to an ESD diode clamping array (GND and 3.3V), an FPGA 3.3V LVCMOS GPIO and an SOC LVCMOS 3.3V IO.<br />
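As a back-of-the-envelope check on these values: when one camera's pin drives the line low behind its 165 Ohm series resistor, the receiving input sees roughly a 4.7K / 165 Ohm divider (idealized; ESD clamp and GPIO leakage currents ignored):<br />

```python
# Idealized low level at the receiving input when the driving camera's
# pin is at 0 V behind its 165 ohm series resistor, working against the
# receiving side's 4.7k pullup to 3.3 V.
V_PULLUP = 3.3      # volts
R_PULLUP = 4700.0   # ohms
R_SERIES = 165.0    # ohms (driving camera's series resistor)

v_low = V_PULLUP * R_SERIES / (R_PULLUP + R_SERIES)
print(round(v_low, 3))  # ~0.112 V, well below a typical 3.3 V LVCMOS V_IL of 0.8 V
```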
<br />
{{Genlock Adapter}}<br />
<br />
[[Category:Features]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Edgertronic_multi-camera_synchronization_genlock&diff=5683Edgertronic multi-camera synchronization genlock2024-01-26T19:19:21Z<p>Tfischer: /* File naming */</p>
<hr />
<div>Note: Genlock terminology changed in software release v2.5.2. The change was done in a way that is backwards compatible for controlling applications that use CAMAPI.<br />
<br />
= Overview =<br />
<br />
[[File:Genlock circled.png|400px|thumb|right|Genlock settings]]<br />
<br />
Genlock is a mode where a source camera provides frame start and trigger signals to one or more receiver cameras. The source and receiver cameras capture frames at the at the exact same rate, with the start of exposure synchronized to within +/- 1 uS of each other. Trigger events are processed by the source and distributed to the receivers in manner that causes the source and receiver camera to trigger on the exact same frame.<br />
<br />
For each frame that is captured, the cameras go through three phases: exposure, frame data readout and idle. As long as the receiver camera has a positive idle time the receiver will stay in genlock with the source. During the exposure phase and frame data readout phase a receiver camera will ignore start-of-exposure signals from the source.<br />
<br />
There are limitations:<br />
<br />
* All cameras have to be configured for compatible timing (see below). For one-to-one frame lock, each receiver has to be set to a frame rate that is greater than or equal to the source's frame rate.<br />
* The user is responsible for making sure all cameras can be triggered before issuing a trigger. There is no automated check that verifies whether or not a receiver camera has completed saving the previously captured video, finished the calibrate operation, has available storage, etc.<br />
<br />
'''All cameras must have identical settings except for Genlock Mode, Sensitivity and Shutter'''. The Shutter value must not limit frame rate. You are responsible for configuring each camera; there is no configuration information flowing over the genlock cable. Incorrect camera configuration will lead to unusable videos.<br />
<br />
Should you wish to violate the above and have different settings, you must always guarantee the following:<br />
# Whatever the settings (exposure, resolution, etc.), the receiver must be able to keep up with the source. Setting the source to 1000 fps doesn't magically make an SC1 set to 1280x1024 able to run faster than 494 fps.<br />
# The receiver must not be set for more post-trigger frames than the source. The source camera only sends genlock timing pulses when it is filling the pre- and post-trigger buffers. When the source post-trigger buffer is filled, the source will stop sending pulses until it is ready for the next capture. Until then, if a genlock receiver still wants to capture more post-trigger frames, it will lose genlock sync.<br />
<br />
= Cabling =<br />
<br />
[[File:Edgertronic-2-cameras-with-genlock-cable.jpeg|300px|right]]<br />
<br />
The frame timing and trigger are sent on a single wire in the IO (external trigger) connector. In the simplest setup, with one source and one receiver camera, all that is needed is a 3 conductor, 2.5mm male to 2.5mm male, patch cord cable to connect the cameras. The cable was included with the camera. If you need another [https://www.amazon.com/TNP-2-5mm-Audio-Cable-6FT/dp/B0742MMXVW 3 conductor 2.5mm male to 2.5mm male patch cord cable] they are commonly available.<br />
<br />
You can also use an accessory product, the [[Genlock Adapter]], for longer cabling runs and/or to support more than 2 genlocked cameras.<br />
<br />
= Initial configuration =<br />
<br />
The cameras need to have unique IP addresses. If you are not using DHCP, you can buy a simple network switch (such as a [https://www.amazon.com/D-Link-Unmanaged-Metal-Desktop-Switch/dp/B007DHWPR4 D-Link DES-105 5 Port 10/100 Network Switch] or the [https://www.amazon.com/D-Link-Gigabit-Unmanaged-Desktop-DGS-105/dp/B000BC7QMM D-Link DGS-105 5 Port Gigabit Network Switch]). For initial configuration, plug the PC and one receiver camera into the network switch. Leave the PC configured for network IP address 10.11.12.1 and the genlock source camera configured for network IP address 10.11.12.13. Each receiver camera needs a different, unique IP address, such as 10.11.12.14, 10.11.12.15, etc. Refer to the [[Ethernet networking|network setup]] instructions to see how to set each receiver camera's IP address. Remember you can only have one receiver connected at a time when setting the camera's IP address. Once all receiver cameras have been configured, plug all cameras into the network switch. There is no need to power off the cameras.<br />
<br />
Browse to the camera you chose to generate the genlock source; from the above instructions it will be at http://10.11.12.13. Configure the camera settings with genlock being configured as '''source'''. Then browse to each receiver camera and configure each one with identical settings, except of course set genlock to '''receiver'''. Once configured and properly cabled, look at the LEDs on all cameras to verify all cameras are in the run state (solid green camera LED). A blinking red camera LED indicates that the receiver camera isn't receiving a genlock signal.<br />
<br />
Once the cameras are wired, configured, and triggerable, go ahead and trigger the source camera to verify the receiver cameras are responding to the trigger. During capture, look at the camera LED on each camera and verify none of them are blinking white. A blinking red/white camera LED indicates that camera was not able to maintain genlock. Check each captured video to verify the results are what you expected.<br />
<br />
= Customize camera settings =<br />
<br />
Browse to each receiver camera and adjust the settings. There are setting limitations:<br />
<br />
* For one-to-one frame lock, each receiver has to be set to a frame rate greater than or equal to the source's frame rate.<br />
<br />
* The number of post-trigger frames (Frame_rate * Duration * (1 - Pretrigger_percentage/100)) on a receiver must not be greater than the source's post-trigger frames. Once a source has filled its post-trigger buffer, it will momentarily stop sending genlock signals, and this will make the receivers unhappy if they still expect to capture more post-trigger frames.<br />
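As a sketch, the two compatibility checks above can be computed from each camera's settings. The function names here are illustrative, not part of CAMAPI:<br />

```python
# Post-trigger frame count, per the formula in the text:
# frames = frame_rate * duration * (1 - pretrigger_percentage / 100)

def post_trigger_frames(frame_rate, duration_s, pretrigger_pct):
    """Number of frames captured after the trigger."""
    return int(frame_rate * duration_s * (1 - pretrigger_pct / 100.0))

def receiver_compatible(source, receiver):
    """Each argument is a (frame_rate, duration_s, pretrigger_pct) tuple."""
    # The receiver must support at least the source's frame rate...
    rate_ok = receiver[0] >= source[0]
    # ...and must not expect more post-trigger frames than the source sends.
    frames_ok = post_trigger_frames(*receiver) <= post_trigger_frames(*source)
    return rate_ok and frames_ok
```

Running this check before invoking the camera is a cheap way to catch a receiver that will drop out of genlock when the source's post-trigger buffer fills.<br />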
<br />
Once all the camera settings are adjusted, do another trial video capture to verify the timing is compatible. The [[metadata file]] will indicate if each receiver camera was able to maintain genlock.<br />
<br />
Additional technical details, if you are trying different settings on different cameras:<br />
<br />
* Post trigger duration of the source camera has to be as long as the longest post trigger duration of one of the receiver cameras.<br />
* Frame rate must be the same on all cameras<br />
* Resolution can be different<br />
<br />
= Metadata file =<br />
<br />
There are two genlock related settings in each captured video [[metadata file]].<br />
<br />
{| class="wikitable"<br />
! Key !! Value !! Meaning<br />
|-<br />
| Genlock || Off<br>Source<br>Receiver<br>External || Camera genlock setting.<br />
|-<br />
| Genlock locked || True<br>False<br>None || Indicates if the receiver camera was able to maintain genlock throughout the video capture. Set to ''None'' if the camera is not configured as a genlock receiver.<br />
|}<br />
<br />
There are several metadata file keys whose meaning can be affected when genlock is enabled:<br />
<br />
{| class="wikitable"<br />
! Key !! Meaning<br />
|-<br />
| Frame rate || The source is slowed down slightly to make sure it doesn’t over-run the receiver. All that is required is the source period must be greater than or equal to the min period that the receiver can run at given the receiver's allowed settings. This includes a little fudge factor for clock frequency error and jitter due to cabling.<br />
|-<br />
| Frame count || The receiver may capture more frames than the source, even with identical settings, because of the frame rate adjustment that ensures both cameras stay synchronized.<br />
|-<br />
| Trigger delay || For the receiver camera, trigger delay is meaningless. Trigger delay is reported correctly when the camera is configured for external genlock.<br />
|}<br />
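If you post-process metadata files, the genlock keys can be checked mechanically. This sketch assumes a simple ''Key : Value'' text line format; verify against an actual metadata file from your camera before relying on it:<br />

```python
def parse_metadata(text):
    """Parse 'Key : Value' lines into a dict (the line format is an assumption)."""
    meta = {}
    for line in text.splitlines():
        key, sep, value = line.partition(':')
        if sep:  # skip lines without a colon
            meta[key.strip()] = value.strip()
    return meta

def genlock_held(meta):
    """True only if a receiver camera maintained genlock for the whole capture."""
    return meta.get('Genlock') == 'Receiver' and meta.get('Genlock locked') == 'True'
```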
<br />
= Camera Settings =<br />
<br />
The genlock setting is stored on the camera so the value is used the next time you power on the camera. The value is also available in the metadata file. The possible Genlock values are described below:<br />
<br />
{| class="wikitable"<br />
! Key !! Value !! Meaning<br />
|-<br />
| rowspan="4" | Genlock || Off || Camera will respond to trigger events as normal and generate its own start-of-exposure timing signal.<br />
|-<br />
| Source || Camera will provide both the genlock trigger and genlock start-of-exposure signals on the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector | External trigger connector]].<br />
|-<br />
| Receiver || Camera will get trigger and start-of-exposure from the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector | External trigger connector]].<br />
|-<br />
| External || An external timing source is providing the start-of-exposure signal. The external trigger signal can be used to trigger the camera when configured for external genlock.<br />
|}<br />
<br />
= Genlock status reporting =<br />
<br />
Only the genlock receiver will report genlock status information, specifically genlock receiver timing error. If the receiver camera is in genlock, then normal camera status information is provided. Genlock status reporting is provided via the LEDs and CAMAPI <tt>get_camstatus()</tt> API.<br />
<br />
== LEDs ==<br />
<br />
When a receiver camera is unable to maintain genlock the camera LED will blink red/white. Once the receiver camera is maintaining genlock, the camera LED will stop blinking white after a five second timeout. Entering the calibrating or saving state will clear the blinking white camera LED.<br />
<br />
If the receiver camera doesn't detect any genlock signal, the camera LED will blink red.<br />
<br />
== Automation using CAMAPI ==<br />
<br />
The original design concept for genlock was the same as for the camera itself: capturing critical videos to get insight into the physical world. The video capture process was anticipated to be through the web user interface and a simple trigger button, which is all that is needed to manually capture a handful of videos. Once the edgertronic camera was integrated into more complex workflows, the use model changed, and additional aspects of the camera's overall operation need to be taken into account.<br />
<br />
Although the cameras are in lock-step synchronization for triggering and capturing each frame, there are other aspects of the cameras that are operating asynchronously.<br />
<br />
* Local storage - an SD card may be full on one camera, but not on another camera.<br />
* SD card performance is difficult to predict. Even two SD cards in the same packaging bought at the same time might have noticeably different performance characteristics. A new SD card tends to be noticeably faster until all the underlying NAND storage has been used once; after that, the needed erase cycles cause the performance to drop.<br />
* The time it takes a camera to finish the current capture and start putting pre-trigger frames in the next buffer can also be affected by the SD card performance.<br />
<br />
If the receiver camera is experiencing a genlock timing error, the CAMAPI <tt>get_camstatus()</tt> API returns a dictionary containing the '''flags''' keyword with the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag (0x40000) bit set. Once the receiver camera is maintaining genlock, the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag is cleared after a five second timeout. In addition, entering the calibrating or saving state will clear the <tt>CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR</tt> flag.<br />
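For example, a polling client can test that bit directly. The flag value 0x40000 is the one given above; <tt>get_camstatus()</tt> is the documented CAMAPI call, sketched here against its returned dictionary:<br />

```python
CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR = 0x40000  # value from the text above

def receiver_genlock_error(camstatus):
    """camstatus is the dictionary returned by the CAMAPI get_camstatus() API."""
    return bool(camstatus.get('flags', 0) & CAMAPI_FLAG_RECEIVER_GENLOCK_ERROR)
```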
<br />
= Feature interaction =<br />
<br />
* If camera [[SDK - Serial console|Serial console]] is enabled, genlock setting is forced to ''Off''.<br />
* If the camera is configured as a genlock receiver, the camera will ignore all local trigger events (web UI trigger button, multi-function button, CAMAPI <tt>trigger()</tt> invocation, etc.); it expects a genlock trigger over the [[Capture_a_video_by_triggering_the_camera#External_trigger_connector|External trigger connector]].<br />
* Each camera generates its own timing for the calibration cycle.<br />
* When configured as a receiver camera, the camera will only use the receiver camera's frame rate setting if the genlock start-of-exposure signal is not detected for 100 ms.<br />
<br />
=False Triggers=<br />
<br />
*Plugging in genlock cable may trigger both source and receiver cameras.<br />
*Unplugging genlock cable may trigger both source and receiver cameras.<br />
*Powering off a genlocked camera may trigger any other connected cameras.<br />
<br />
= Metadata file =<br />
<br />
The metadata file on the source camera will contain an entry indicating the delay from the incoming trigger and the start of the first frame following the trigger. The trigger delay value in the metadata file created by the receiver cameras is meaningless.<br />
<br />
= File naming =<br />
<br />
There are several CAMAPI mechanisms to allow you to name the file when you trigger the camera. Those mechanisms work for the genlock source camera. How should filenames be handled for the downstream genlock receiver cameras? <br />
<br />
Example: Multiple cameras are set up to capture an event, where the event is given a GUID identifier. The goal is for every camera in the genlock group to save video and metadata files containing the GUID.<br />
<br />
The event could be a baseball pitch, where there is a genlock source camera that gets triggered when the event occurs, with genlock used to cause the rest of the cameras in the genlock group to trigger at the same time.<br />
<br />
= Technical description =<br />
<br />
* For genlock to function properly, the receiver camera(s) must support a frame rate greater or equal to the source camera's output frame rate. The easiest way to ensure this is to use identical settings on all cameras. The source camera will decrease its maximum allowed frame rate slightly to allow for the slight clock differences that are possible between the source and receiver cameras.<br />
<br />
= Signaling =<br />
<br />
The camera can be triggered from several sources:<br />
<br />
* UI / API - trigger via CAMAPI <tt>trigger()</tt> API.<br />
* [[Multi-function button]] on back of camera<br />
* Tip of 2.5mm phone jack ([[Trigger#External_trigger_connector|I/O connector]])<br />
* If using a [[Genlock Adapter]] - tip of the 2.5mm phone jack labeled '''TRIGGER IN'''.<br />
<br />
The UI / CAMAPI <tt>trigger()</tt> API, multi-function button, and tip of the phone jack are logically OR-ed together, and only the source camera responds to the trigger event.<br />
<br />
{| class="wikitable"<br />
! 2.5mm phone<br>jack signal !! Genlock<br>OFF mode !! Genlock<br>SOURCE mode !! Genlock<br>RECEIVER mode !! Genlock<br>EXTERNAL mode<br />
|-<br />
| align="center" | tip || align="center" | trigger input signal<br>(3.3V LVCMOS, active low) || align="center" | trigger input signal<br>(3.3V LVCMOS, active low) || align="center" | unused || align="center" | trigger input signal<br>(3.3V LVCMOS, active low)<br />
|-<br />
| align="center" | ring || align="center" | unused || align="center" | genlock output signal to receiver camera(s)<br>(combined frame start and trigger) || align="center" | frame start input signal<br>(3.3V LVCMOS, active low) || align="center" | frame start input signal<br>(3.3V LVCMOS, active low)<br />
|-<br />
| align="center" | sleeve || colspan="4" align="center" | ground<br />
|}<br />
<br />
You can use one of the cables listed below to genlock two cameras (this setup does not support external trigger via wired remote).<br />
<br />
* Steren 252-612 12' 2.5mm Male to 2.5mm Male<br />
* Generic [https://www.amazon.com/TNP-2-5mm-Audio-Cable-6FT/dp/B0742MMXVW 3 conductor 2.5mm male to 2.5mm male patch cord cable]<br />
<br />
We recommend using [[Genlock Adapter|Genlock Adapters]] if you are going to genlock more than 2 cameras. If you are good with a soldering iron, you can make your own custom cable; you will need one to genlock 3 to 5 cameras, to use a switch closure to trigger the source, or to have an external trigger. The signaling from the source genlock camera can drive up to 4 receiver genlock cameras. If you need to connect more than 4 receiver genlock cameras, then you will need to increase the drive by using an active cable with a 3.3V LVCMOS buffer. You can get 5V from the USB port, but will need to step it down to 3.3V before powering the LVCMOS buffer. If this doesn't make sense to you, then please use [[Genlock Adapter|Genlock Adapters]] to drive more than 4 receiver genlock cameras.<br />
<br />
<span style="color:purple"><br />
'''If you are making your own cable, note that the wired trigger we supply will ground both the tip and ring when the button is pressed. Connect the trigger's tip and sleeve to the source camera tip and sleeve respectively, and DO NOT connect the trigger ring to either camera. Look at the table above and you'll understand why.'''<br />
</span> <br><br />
<br />
<span style="color:purple"><br />
'''If you are having problems with your cabling, test out genlock using the supplied 12' 2.5mm male-male genlock cable between the source camera and one receiver camera. Set up the two cameras in the UI, and then trigger the source from the UI. Both cameras should be synced and trigger at the same instant. If you are having problems when using your custom cabling, please check your wiring.'''<br />
<br />
</span><br />
<br />
Electrically, the tip and ring are identical circuits, with the exception that the tip is always an input, while the ring is an output on the genlock source camera and an input on all genlock receiver cameras. Each signal has a 4.7K pullup to 3.3V, followed by a 165 Ohm series resistor. The other end of the 165 Ohm resistor connects to an ESD diode clamping array (GND and 3.3V), an FPGA 3.3V LVCMOS GPIO and an SOC LVCMOS 3.3V IO.<br />
<br />
{{Genlock Adapter}}<br />
<br />
[[Category:Features]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Most_common_support_question&diff=5681Most common support question2024-01-20T03:48:05Z<p>Tfischer: Created page with "I probably should have called this page ''common support programs'' but we received the same question from several organizations this past week. Short form of the question: '..."</p>
<hr />
<div>I probably should have called this page ''common support programs'' but we received the same question from several organizations this past week.<br />
<br />
Short form of the question: '''My laptop can't talk to the camera'''<br />
<br />
Fun variations of the question:<br />
<br />
* I started a new job and my laptop can't talk to the camera someone handed to me.<br />
* My camera worked great, then I didn't use it for a year and now my laptop can't talk to the camera.<br />
<br />
My variation of the question:<br />
<br />
* I didn't read all your great documentation, so will you paste it into an email for me?<br />
<br />
That's rude and a bit unfair. The documentation was written by someone nerdy with in-depth knowledge of TCP/IP networking, and is used by people who have better things to worry about.<br />
<br />
Short form of the answer: '''Configure your laptop's network settings'''<br />
<br />
The actual answer is dependent on your specific configuration.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5680Captured video queue control2024-01-16T16:13:31Z<p>Tfischer: /* Processing status changes */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order the captured videos are saved in. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
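A minimal sketch of that control flow, assuming each CAMAPI method accepts a dictionary using the keys described in this document. A real client must wait for the background save to complete (poll <tt>get_camstatus()</tt>) before deleting; that wait is elided here:<br />

```python
def save_and_drop_oldest(cam):
    """Save the oldest queued capture, then delete it from the queue.

    cam is assumed to expose the CAMAPI methods named in this document.
    """
    info = cam.get_captured_video_info()
    # Per-video entries are the dict-valued entries; their keys are
    # capture IDs (integers forced to strings by JSON).
    ids = sorted(int(k) for k, v in info.items() if isinstance(v, dict))
    if not ids:
        return None
    oldest = ids[0]
    cam.selective_save({'buffer_number': oldest})
    # ...poll get_camstatus() until the save completes before deleting...
    cam.delete_captured_videos({'delete_list': [oldest]})
    return oldest
```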
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that no longer reflect their actual meaning. Had a more insightful engineer anticipated all the cool things you can do with a high-speed camera, they would have picked more generic names in the first place.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons, with capture ID number being a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
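In practice, a client can recover the capture IDs from a <tt>get_captured_video_info()</tt> response like this (a sketch; the sample response shape follows the description above):<br />

```python
def capture_ids(info):
    """Return the queued capture IDs, oldest first.

    Per-video entries are the dict-valued entries in the response;
    their keys are the capture IDs (buffer_number values), which the
    JSON encoding forces to be strings.
    """
    return sorted(int(key) for key, value in info.items()
                  if isinstance(value, dict))
```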
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where there always seem to be unsaved captured videos, with the camera catching up at the half inning.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100ms capture before the event and 100 ms capture after the event. Since the 700ms latency is larger than the 100ms post-event you want to capture, there will be 600ms of video after the duration of interest. Those 600ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is -(800ms * 1000fps) = frame -800. Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100ms before the event and 800ms before the trigger. <br />
* Last frame: -600; wanting 200ms of video captured at 1000fps means a total of 200 frames.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
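The frame arithmetic above can be checked with a short helper (illustrative names, not part of CAMAPI):<br />

```python
def frames_of_interest(frame_rate, latency_s, pre_event_s, post_event_s):
    """First and last frame numbers to pass to selective_save().

    Frame 0 is the trigger frame; pre-trigger frames are negative.
    latency_s is the delay from the event until the trigger arrives.
    """
    first = -int(round((latency_s + pre_event_s) * frame_rate))
    frame_count = int(round((pre_event_s + post_event_s) * frame_rate))
    return first, first + frame_count

# 700 ms trigger latency, 100 ms before and after the event, 1000 fps:
# yields frames -800 through -600, matching the derivation above.
```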
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI <tt>get_camstatus()</tt> method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
saving = False<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if saving and new_status.get('save_state') == CAMAPI_SAVE_STATE_IDLE:<br />
        saving = process_save_complete(cam, new_status)<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        saving = process_capture_complete(cam, new_status, saving)<br />
    if new_status.get('flags') != status.get('flags'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
The above uses polling, so you have to ask yourself if it is possible to miss something important because it occurs and clears in less than the polling time. For example, if you were monitoring whether the camera capture state is ''triggered'', you could miss this event, as the camera could become triggered, fill the post-trigger buffer, and change state again in less than the polling time. The logic shown above will work independent of the sleep time. The caveat is that by monitoring the flags, you could miss an error indication if the error self-corrects in less than the poll time. An example could be the camera storage becoming full and then more space becoming available due to the controlling device deleting a saved video after it was copied to long-term storage.<br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI <tt>run()</tt> method is invoked, the camera increments the active_buffer (really the capture count), switches to the filling-pre-trigger-buffer state, and starts filling the pre-trigger buffer with video frames. If no trigger has occurred by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, overwriting the oldest frame, until a trigger occurs.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* If a DDR3 buffer is available, the camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of the buffer in DDR3. <br />
* The CAMAPI <tt>get_camstatus()</tt> method returned dictionary unsaved_count entry is incremented.<br />
* A new entry is added to the CAMAPI <tt>get_captured_video_info()</tt> returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring a change in <tt>get_camstatus()</tt> active_buffer (really capture count), and once detected, the CAMAPI <tt>get_captured_video_info()</tt> method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status, saving):<br />
    '''Return True if the camera is saving a captured video.'''<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        saving = controller_handle_new_videos(saving)<br />
    return saving<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI <tt>selective_save()</tt> method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera indicates the save has completed when the save_state entry returned by <tt>get_camstatus()</tt> returns to idle.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Application-specific handling of a completed save goes here, e.g.<br />
    # start the next selective_save() and return True if one was started.<br />
    return False<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Application-specific handling of camera status flag changes goes here.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now as the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5679Captured video queue control2024-01-14T22:55:05Z<p>Tfischer: /* Unfortunate terminology */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode, the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode, giving a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
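The queue-control calls above can be sketched as a small helper. This is a hypothetical sketch: the <tt>cam</tt> object stands for any client exposing the CAMAPI methods named on this page, and the parameter key names follow the interaction table below but should be checked against the CAMAPI documentation.<br />

```python
# Hypothetical helper: save one capture from the queue (optionally trimmed
# to a frame range), then free its DDR3 buffer. 'cam' is any client object
# exposing the CAMAPI methods named on this page; key names are assumptions.
def save_then_delete(cam, buffer_number, start_frame=None, end_frame=None):
    request = {'buffer_number': buffer_number}
    if start_frame is not None:
        request['start_frame'] = start_frame
    if end_frame is not None:
        request['end_frame'] = end_frame
    cam.selective_save(request)   # background save starts
    # ...after polling shows the save completed, release the buffer...
    cam.delete_captured_videos({'delete_list': [buffer_number]})
```

In a real client, the delete would be issued only after polling <tt>get_camstatus()</tt> shows the save completed, as described in the camera monitoring section.<br />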
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that no longer reflect their actual meaning. Had a more insightful engineer anticipated all the cool things you can do with a high-speed camera, they would have picked more generic names in the first place.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. Each such key is actually an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called, the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
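Because ''first_buffer'' has no meaning in manual save mode, a client has to walk the returned dictionary itself to find the queued captures. A minimal sketch, where the capture-entry field names follow this page but the sample values are invented:<br />

```python
# Extract the queued captures from a get_captured_video_info()-style
# response: entries whose value is a dict are captures, and their keys are
# integer buffer numbers that the JSON encoding turned into strings.
def queued_captures(info):
    captures = {}
    for key, value in info.items():
        if isinstance(value, dict):        # skip scalars like unsaved_count
            captures[int(key)] = value     # recover the integer capture ID
    return dict(sorted(captures.items()))

# Invented sample response:
sample = {
    'unsaved_count': 2,
    '3': {'trigger_time': '1700000000', 'target_filename': 'pitch_0003'},
    '4': {'trigger_time': '1700000123', 'target_filename': 'pitch_0004'},
}
```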
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of one of the automatic background save modes, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where there always seem to be unsaved captured videos, with the camera catching up between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want to capture, there will be 600 ms of video after the duration of interest. Those 600 ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The pre-trigger buffer holds 800 ms * 1000 fps = 800 frames, so the first frame captured is frame -800. Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100 ms before the event (800 ms before the trigger). <br />
* Last frame: -600; wanting 200 ms of video captured at 1000 fps, that is a total of 200 frames.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
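The frame arithmetic above can be sketched as a small helper. All numbers are the assumptions from this example: frame 0 is the trigger frame and pre-trigger frames are negative.<br />

```python
# Compute the first and last frame of interest given the capture rate, the
# radar trigger latency, and the desired pre/post event durations.
def frames_of_interest(fps, latency_ms, pre_event_ms, post_event_ms):
    event_frame = -latency_ms * fps // 1000           # event precedes trigger
    first = event_frame - pre_event_ms * fps // 1000  # 100 ms before event
    last = event_frame + post_event_ms * fps // 1000  # 100 ms after event
    return first, last

print(frames_of_interest(1000, 700, 100, 100))  # -> (-800, -600)
```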
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
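A trigger over CAMAPI could look like this. This is a hypothetical wrapper: the key names follow the interaction table above, while the filename pattern and the speed formatting are invented for illustration.<br />

```python
# Trigger the camera, recording the radar-measured speed as user_parm so the
# resulting entry in get_captured_video_info() can be matched to this pitch.
# 'cam' is any client object exposing the CAMAPI trigger() method.
def trigger_with_speed(cam, speed_mph):
    cam.trigger({'user_parm': '%.1f mph' % speed_mph,
                 'base_filename': 'pitch_%03.0f' % speed_mph})
```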
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI <tt>get_camstatus()</tt> method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
saving = False<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if saving and new_status.get('save_state') == CAMAPI_SAVE_STATE_IDLE:<br />
        saving = process_save_complete(cam, new_status)<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        saving = process_capture_complete(cam, new_status, saving)<br />
    if new_status.get('flags') != status.get('flags'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
The above uses polling, so you have to ask yourself whether it is possible to miss something important because it occurs and clears in less than the polling interval. For example, if you were monitoring whether the camera capture state is ''triggered'', you could miss that event, as the camera could become triggered, fill the post-trigger buffer, and change state again in less than the polling interval. The logic shown above works independent of the sleep time. The caveat is that by monitoring the flags, you could miss an error indication if the error self-corrects in less than the poll interval. One example is the camera storage becoming full and then space becoming available again when the controlling device deletes a saved video after copying it to long-term storage.<br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI <tt>run()</tt> method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger occurs by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, overwriting the oldest frame, until a trigger occurs.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* If a DDR3 buffer is available, the camera state switches to filling-pre-trigger-buffer and the camera starts storing videos frames in the pre-trigger portion of the buffer in the DDR3. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI <tt>get_camstatus()</tt> method is incremented.<br />
* A new entry is added to the dictionary returned by the CAMAPI <tt>get_captured_video_info()</tt> method (and the unsaved_count in that dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the <tt>get_camstatus()</tt> active_buffer (really the capture count); once a change is detected, the CAMAPI <tt>get_captured_video_info()</tt> method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status, saving):<br />
    '''Return True if the camera starts saving a captured video.'''<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    saving = False<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:  # only capture entries are dictionaries<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        saving = controller_handle_new_videos(saving)<br />
    return saving<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI <tt>selective_save()</tt> method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera indicates the save has completed when the save_state entry returned by <tt>get_camstatus()</tt> returns to idle.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Application-specific handling of a completed save goes here, e.g.<br />
    # start the next selective_save() and return True if one was started.<br />
    return False<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Application-specific handling of camera status flag changes goes here.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now as the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5678Captured video queue control2024-01-14T22:52:54Z<p>Tfischer: /* Camera monitoring */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode, the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode, giving a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. Each such key is actually an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called, the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of one of the automatic background save modes, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where there always seem to be unsaved captured videos, with the camera catching up between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want to capture, there will be 600 ms of video after the duration of interest. Those 600 ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The pre-trigger buffer holds 800 ms * 1000 fps = 800 frames, so the first frame captured is frame -800. Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100 ms before the event (800 ms before the trigger). <br />
* Last frame: -600; wanting 200 ms of video captured at 1000 fps, that is a total of 200 frames.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI <tt>get_camstatus()</tt> method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
saving = False<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if saving and new_status.get('save_state') == CAMAPI_SAVE_STATE_IDLE:<br />
        saving = process_save_complete(cam, new_status)<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        saving = process_capture_complete(cam, new_status, saving)<br />
    if new_status.get('flags') != status.get('flags'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
The above uses polling, so you have to ask yourself whether it is possible to miss something important because it occurs and clears in less than the polling interval. For example, if you were monitoring whether the camera capture state is ''triggered'', you could miss that event, as the camera could become triggered, fill the post-trigger buffer, and change state again in less than the polling interval. The logic shown above works independent of the sleep time. The caveat is that by monitoring the flags, you could miss an error indication if the error self-corrects in less than the poll interval. One example is the camera storage becoming full and then space becoming available again when the controlling device deletes a saved video after copying it to long-term storage.<br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI <tt>run()</tt> method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger occurs by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, overwriting the oldest frame, until a trigger occurs.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* If a DDR3 buffer is available, the camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of the buffer in DDR3. <br />
* The ''unsaved_count'' entry in the dictionary returned by the CAMAPI <tt>get_camstatus()</tt> method is incremented.<br />
* A new entry is added to the dictionary returned by the CAMAPI <tt>get_captured_video_info()</tt> method (and the ''unsaved_count'' in that dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the <tt>get_camstatus()</tt> active_buffer (really the capture count); once a change is detected, the CAMAPI <tt>get_captured_video_info()</tt> method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status, saving):<br />
    '''Return True if the camera is saving (or starts saving) a captured video.'''<br />
    global vidque  # dictionary: keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:  # per-capture entries have dictionary values<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        saving = controller_handle_new_videos(saving)<br />
    return saving<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera indicates the save has completed when the ''save_state'' entry returned by <tt>get_camstatus()</tt> transitions back to idle.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. In this case, the camera will: <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    '''Handle a finished save; return True if another save was started.<br />
<br />
    A typical controller records the saved file, removes the video from<br />
    its cached queue (and optionally from the camera via<br />
    delete_captured_videos()), then starts the next selective_save()<br />
    if more captured videos are waiting.'''<br />
    return False  # placeholder: no follow-on save started<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    '''React to a change in the camera status flags (for example, storage full).'''<br />
    pass  # placeholder<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server in use doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
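Selecting the mode happens when starting a capture: the run() settings dictionary must carry a ''save_mode'' of SAVE_MODE_MANUAL (see the method-interaction table below). A minimal sketch of assembling those settings; the helper name is hypothetical, the ''frame_rate'' key is illustrative, and the SAVE_MODE_MANUAL constant would come from the vendor's CAMAPI client in real code:<br />

```python
def manual_mode_settings(base_settings, save_mode_manual):
    """Return run() settings requesting manual save mode.

    base_settings: the capture settings you would otherwise pass to run().
    save_mode_manual: the SAVE_MODE_MANUAL constant from the CAMAPI client.
    """
    settings = dict(base_settings)          # don't mutate the caller's dict
    settings['save_mode'] = save_mode_manual
    return settings

# Illustrative key names; consult the CAMAPI reference for your release.
requested = manual_mode_settings({'frame_rate': 1000}, 'manual')
print(requested)  # → {'frame_rate': 1000, 'save_mode': 'manual'}
# cam.run(requested)  # real call against an hcamapi-style client
```
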
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called, the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
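Because the JSON transport forces the integer capture IDs to strings (see the terminology notes above), a client has to walk the response keys, keep only the dictionary-valued entries, and convert the keys back to integers. A minimal sketch; the function name and the sample response values are hypothetical:<br />

```python
def queued_capture_ids(info):
    """Extract capture IDs (aka buffer numbers) from a
    get_captured_video_info() response.

    Entries whose value is a dictionary describe a queued capture; scalar
    entries such as 'unsaved_count' are skipped. JSON forces the integer
    keys to strings, so convert them back before sorting."""
    return sorted(int(k) for k, v in info.items() if isinstance(v, dict))

# Hypothetical response shape: per-capture dictionaries plus scalar keys.
sample = {
    'unsaved_count': 2,
    '3': {'trigger_time': 1700000123, 'target_filename': 'pitch_0003'},
    '5': {'trigger_time': 1700000456, 'target_filename': 'pitch_0005'},
}
print(queued_capture_ids(sample))  # → [3, 5]
```
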
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods that the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video information. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
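The delete_captured_videos() rows above describe two request shapes: a ''delete_list'' of buffer indices, or ''delete_all''. A sketch of building that argument dictionary; the helper name is hypothetical, the key names are taken from the table, and the exact request shape should be checked against the CAMAPI reference for your release:<br />

```python
def build_delete_request(capture_ids=None, delete_all=False):
    """Build the argument dictionary for delete_captured_videos().

    capture_ids: buffer indices from get_captured_video_info().
    delete_all: when True, request that the whole queue be emptied."""
    if delete_all:
        return {'delete_all': True}
    return {'delete_list': list(capture_ids or [])}

# e.g. after copying saved files off the camera to long-term storage:
req = build_delete_request([1, 2])
print(req)  # → {'delete_list': [1, 2]}
# cam.delete_captured_videos(req)  # real call against an hcamapi-style client
```

Freeing DDR3 this way is what re-enables triggering once the captured video queue has filled.<br />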
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see the live view in between video saves. This can be useful in cases like capturing baseball pitches, where there always seem to be captured videos waiting, with the camera catching up at the half inning.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100ms capture before the event and 100 ms capture after the event. Since the 700ms latency is larger than the 100ms post-event you want to capture, there will be 600ms of video after the duration of interest. Those 600ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is frame -800 (800ms &times; 1000fps = 800 frames before the trigger). Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (the frames from 100ms before the event to 100ms after the event): <br />
* First frame: -800, which is 100ms before the event and 800ms before the trigger. <br />
* Last frame: -600, which is 100ms after the event; capturing 200ms of video at 1000fps gives a total of 200 frames.<br />
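The arithmetic above can be captured in a small helper. This is a sketch: the function name is hypothetical, and the selective_save() call is shown only as a comment, with argument names taken from the method-interaction table earlier on this page:<br />

```python
def frames_of_interest(fps, trigger_latency_ms, pre_event_ms, post_event_ms):
    """Return (start_frame, end_frame) relative to the trigger (frame 0).

    The event happened trigger_latency_ms before the trigger, so when the
    latency exceeds the post-event window, all frames of interest have
    negative frame numbers."""
    event_frame = -trigger_latency_ms * fps // 1000  # frame when the event occurred
    start = event_frame - pre_event_ms * fps // 1000
    end = event_frame + post_event_ms * fps // 1000
    return start, end

# 1000fps, 700ms radar latency, 100ms before/after the event:
print(frames_of_interest(1000, 700, 100, 100))  # → (-800, -600)
# cam.selective_save({'buffer_number': 1, 'start_frame': -800, 'end_frame': -600})
```
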
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either by a radar detector contact closure connected to the camera's remote trigger input or by using the CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or a filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI <tt>get_camstatus()</tt> method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import time<br />
<br />
import hcamapi  # camera API wrapper used throughout this page<br />
<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
saving = False<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        saving = process_capture_complete(cam, new_status)<br />
    if saving and new_status.get('save_state') == CAMAPI_SAVE_STATE_IDLE:  # CAMAPI constant<br />
        process_save_complete(cam, new_status)<br />
        saving = False  # don't reprocess the same completed save<br />
    if new_status.get('flags') != status.get('flags'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
The above uses polling, so you have to ask yourself whether it is possible to miss something important because it occurs and clears in less than the polling interval. For example, if you were monitoring whether the camera capture state is ''triggered'', you could miss this event, as the camera could be triggered, fill the post-trigger buffer, and change state again in less than the polling interval. The logic shown above works independent of the sleep time. The caveat is that by monitoring the flags, you could miss an error indication if the error self-corrects in less than the poll time. An example would be the camera storage becoming full and then space becoming available again because the controlling device deleted a saved video after copying it to long-term storage.<br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI <tt>run()</tt> method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger has occurred when the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, overwriting the oldest frame, until a trigger occurs.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* If a DDR3 buffer is available, the camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of that DDR3 buffer. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI <tt>get_camstatus()</tt> method is incremented.<br />
* A new entry is added to the CAMAPI <tt>get_captured_video_info()</tt> returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the <tt>get_camstatus()</tt> active_buffer (really the capture count); once a change is detected, the CAMAPI <tt>get_captured_video_info()</tt> method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
    # Assumes controller_handle_new_videos() initiates a save, so the<br />
    # monitoring loop can use the return value as its saving flag.<br />
    return vidque_modified<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI <tt>selective_save()</tt> method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera indicates the save has completed by returning the save state to <tt>CAMAPI_SAVE_STATE_IDLE</tt> in the <tt>get_camstatus()</tt> response.<br />
<br />
It is also possible for a save to be interrupted, for example due to a storage full condition. In this case, the camera reports the error via the status flags returned by <tt>get_camstatus()</tt>. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Sketch: what happens next is up to the controlling application, for<br />
    # example deleting the saved capture from the queue and starting the<br />
    # next selective_save().<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Sketch: inspect status.get('flags') and react, for example freeing<br />
    # camera storage when a storage-full condition is reported.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs, by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI <tt>get_camstatus()</tt> to detect whether a capture or a save just finished. I mention this now as the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5675Captured video queue control2024-01-14T22:47:29Z<p>Tfischer: /* Camera monitoring */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
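As a minimal sketch of queue housekeeping under manual save mode (the helper name and retention policy are made up; only the <tt>get_captured_video_info()</tt> key handling and the <tt>delete_captured_videos()</tt> ''delete_list'' usage come from this page):<br />

```python
def plan_deletions(info, keep_latest=1):
    """Given a get_captured_video_info() response, return the capture IDs to
    delete so only the newest keep_latest captures stay queued. Queue entries
    are the dict-valued items; their keys are capture IDs (strings per JSON)."""
    ids = sorted(int(k) for k, v in info.items() if isinstance(v, dict))
    if keep_latest <= 0:
        return ids
    return ids[:-keep_latest]

# With a camera connected (cam = hcamapi.HCamapi(...), assumed wrapper API):
#   stale = plan_deletions(cam.get_captured_video_info())
#   if stale:
#       cam.delete_captured_videos({'delete_list': stale})
```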
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and ending frames to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check whether the value of each key has a dictionary data type. Sorry about that. The key is conceptually an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
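A small helper can hide the terminology quirks above. This is a sketch (the function name is made up), normalizing the <tt>get_captured_video_info()</tt> response into integer capture IDs:<br />

```python
def queued_captures(info):
    """Return {capture_id: entry} for the queued videos in a
    get_captured_video_info() response: queue entries are the dict-valued
    items, and their keys are capture IDs that JSON turned into strings."""
    return {int(k): v for k, v in info.items() if isinstance(v, dict)}

# len(queued_captures(info)) should then match info['unsaved_count'], which
# really means "number of videos in the captured video queue".
```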
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called, the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5674Captured video queue control2024-01-13T18:56:39Z<p>Tfischer: /* Processing capture state changes */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user via the web user interface, or a client application, controls the encoding parameters, starting and ending frames to save, and the order the captured videos are saved. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMPAPI <tt>run()</tt> method is invoked thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provide information about all the captured video in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons with capture ID number being a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after <tt>run()</tt> menthod is invoked will have an capture ID / ''buffer_nummber'' of 1).<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with varous CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May effect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May effect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_namer'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides default filename and any filename set via the CAMAPI methods run() reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video on intered. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, when there always seems to be captured videos, with the camera catching up at the half inning.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100ms capture before the event and 100 ms capture after the event. Since the 700ms latency is larger than the 100ms post-event you want to capture, there will be 600ms of video after the duration of interest. Those 600ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is -800 (800 ms at 1000 fps is 800 frames of pre-trigger video). Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, 100ms before the event, which is 800ms before the trigger. <br />
* Last frame: -600, wanting 200ms of video captured at 1000fps, that is a total of 200 frames.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
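The frame arithmetic above can be sketched as a small helper that converts the trigger latency and the desired pre/post-event windows into the frame numbers to pass to <tt>selective_save()</tt>. This is a minimal sketch; the function name is ours, not part of CAMAPI:

```python
def frames_of_interest(trigger_latency_ms, pre_event_ms, post_event_ms, fps):
    """Return (start_frame, end_frame) relative to the trigger frame (0).

    Pre-trigger frames are negative, matching the camera's numbering.
    """
    ms_per_frame = 1000.0 / fps
    event_frame = -round(trigger_latency_ms / ms_per_frame)  # event precedes trigger
    start_frame = event_frame - round(pre_event_ms / ms_per_frame)
    end_frame = event_frame + round(post_event_ms / ms_per_frame)
    return start_frame, end_frame

# 700 ms radar latency, 100 ms before/after the event, 1000 fps
print(frames_of_interest(700, 100, 100, 1000))  # (-800, -600)
```

The two values map directly onto the ''start_frame'' and ''end_frame'' keys accepted by <tt>selective_save()</tt>.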
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
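For instance, a trigger request could carry the radar speed as the user parameter and fold it into the filename. The ''user_parm'' and ''base_filename'' key names come from the CAMAPI table above; the helper and the filename pattern are illustrative only:

```python
def build_trigger_params(speed_mph):
    """Build the dictionary of optional parameters for a trigger() call.

    Key names are from the CAMAPI documentation; the filename
    pattern is just an example.
    """
    return {
        'user_parm': '%.1f mph' % speed_mph,
        'base_filename': 'pitch_%03d' % round(speed_mph),
    }

print(build_trigger_params(92.4))
```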
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI <tt>get_camstatus()</tt> method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    if new_status.get('save_state') != status.get('save_state'):  # key name illustrative<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('state') != status.get('state'):  # key name illustrative<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger has occurred by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, overwriting the oldest frame, until a trigger occurs.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* If a DDR3 buffer is available, the camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of that DDR3 buffer. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI <tt>get_camstatus()</tt> method is incremented.<br />
* A new entry is added to the dictionary returned by the CAMAPI <tt>get_captured_video_info()</tt> method (and the unsaved_count in that dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring a change in <tt>get_camstatus()</tt> active_buffer (really capture count), and once detected, the CAMAPI <tt>get_captured_video_info()</tt> method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if isinstance(vid, dict):<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment in any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera indicates the save has completed through the status returned by <tt>get_camstatus()</tt>.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. In this case, the camera will: <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # The application decides what happens next: for example, delete the<br />
    # saved capture via delete_captured_videos() and start the next<br />
    # selective_save().<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    pass  # e.g. react to a storage-full or buffers-full condition<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs, by sending CAMAPI <tt>get_camstatus()</tt> responses whenever the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user via the web user interface, or a client application, controls the encoding parameters, starting and ending frames to save, and the order the captured videos are saved. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and ending frames to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
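Since ''first_buffer'' is meaningless in manual save mode, a client has to walk the <tt>get_captured_video_info()</tt> dictionary, keep only the entries whose value is itself a dictionary, and convert the JSON string keys back to integer capture IDs. A minimal sketch; the sample response shape is illustrative, not a verbatim camera reply:

```python
def queued_captures(info):
    """Map integer capture IDs to their video-info dictionaries.

    The JSON layer forces integer dictionary keys to strings,
    so convert them back here.
    """
    return {int(k): v for k, v in info.items() if isinstance(v, dict)}

# Illustrative response shape only
sample = {'unsaved_count': 2,
          '1': {'trigger_time': 1700000000, 'user_parm': '92.4 mph'},
          '2': {'trigger_time': 1700000007, 'user_parm': '88.1 mph'}}
print(sorted(queued_captures(sample)))  # [1, 2]
```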
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where unsaved videos tend to accumulate during play and the camera catches up on saves between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100ms capture before the event and 100 ms capture after the event. Since the 700ms latency is larger than the 100ms post-event you want to capture, there will be 600ms of video after the duration of interest. Those 600ms do not need to be saved, thus use CAMAPI <tt>selective_save()</tt> to specify the starting and ending frames to save.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is 800 ms × 1000 fps = 800 frames before the trigger, i.e. frame -800; remember all pre-trigger frames have a negative frame number. The last frame captured is frame 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100 ms before the event (800 ms before the trigger). <br />
* Last frame: -600, which is 100 ms after the event; 200 ms of video at 1000 fps is 200 frames in total.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
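The arithmetic above can be sketched in Python. This helper only reproduces the calculation from this section; the function name is illustrative and is not part of CAMAPI:<br />

```python
def manual_save_frames(latency_ms, pre_event_ms, post_event_ms, fps):
    """Compute the pre-trigger buffer length and the frame range to save.

    Frame 0 is the trigger frame; pre-trigger frames have negative numbers.
    """
    # The pre-trigger buffer must cover the trigger latency plus the
    # desired pre-event video.
    pretrigger_ms = latency_ms + pre_event_ms
    first_frame = -pretrigger_ms * fps // 1000
    # The event happened latency_ms before the trigger; keep post_event_ms
    # of video after the event.
    last_frame = -(latency_ms - post_event_ms) * fps // 1000
    return pretrigger_ms, first_frame, last_frame

# Radar example from above: 700 ms latency, 100 ms before/after, 1000 fps.
print(manual_save_frames(700, 100, 100, 1000))  # (800, -800, -600)
```

The returned first and last frame values are exactly what you would pass as ''start_frame'' and ''end_frame'' to <tt>selective_save()</tt>.<br />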
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using the CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or a filename.<br />
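When triggering via CAMAPI, that metadata can be packaged as the trigger() parameter dictionary. A minimal sketch: the ''user_parm'' and ''base_filename'' keys come from the interaction table earlier on this page, while the helper function and the value formats are hypothetical:<br />

```python
def make_trigger_params(ball_speed_mph, batter):
    # user_parm is echoed back later by get_captured_video_info(), letting
    # you match this trigger() invocation to its captured video in the queue.
    return {
        'user_parm': 'speed=%.1fmph' % ball_speed_mph,
        'base_filename': 'pitch_%s' % batter,  # may affect target_filename
    }

params = make_trigger_params(92.4, 'jones')
# then, with a connected camera object: cam.trigger(params)
print(params['user_parm'])  # speed=92.4mph
```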
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    # The keys compared below are placeholders for the camera's save state<br />
    # and overall status; use the keys your get_camstatus() actually returns.<br />
    if new_status.get('save_state') != status.get('save_state'):<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('state') != status.get('state'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger occurs by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of the previously empty DDR3 buffer. <br />
* The ''unsaved_count'' entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the CAMAPI get_captured_video_info() returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the get_camstatus() ''active_buffer'' value (really the capture count); once a change is detected, invoke the CAMAPI get_captured_video_info() method to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if isinstance(vid, dict):<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, monitor the camera status to determine when the save has completed.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition.<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # e.g. record the saved file, then start the next queued selective_save()<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # e.g. react to a storage-full or other error condition<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs, by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5671Captured video queue control2024-01-13T18:49:49Z<p>Tfischer: /* Manual save interaction with other CAMAPI methods */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and ending frames to save, using the CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
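The save-then-delete lifecycle above can be sketched as plain request dictionaries. The key names (''buffer_number'', ''start_frame'', ''end_frame'', ''delete_list'') come from the interaction table later on this page; the helper functions themselves are hypothetical:<br />

```python
def selective_save_request(capture_id, start_frame=None, end_frame=None):
    # capture_id is the 'buffer_number' reported by get_captured_video_info().
    req = {'buffer_number': capture_id}
    if start_frame is not None:
        req['start_frame'] = start_frame
    if end_frame is not None:
        req['end_frame'] = end_frame
    return req

def delete_request(capture_ids):
    # Free DDR3 buffers once their videos are saved or no longer wanted.
    return {'delete_list': list(capture_ids)}

# Save frames -800..-600 of capture 1, then free that buffer:
print(selective_save_request(1, -800, -600))
print(delete_request([1]))
```

A client would pass these dictionaries to <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt>, waiting for the save to complete before deleting the buffer it came from.<br />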
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
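Since ''first_buffer'' is meaningless in manual save mode, a client has to walk the returned dictionary itself, keeping only the keys whose values are dictionaries. A small sketch of that walk (the response shape shown is illustrative, inferred from the description above):<br />

```python
def queued_videos(info):
    """Extract capture-ID -> video-info entries from a
    get_captured_video_info() response, skipping scalar bookkeeping
    keys such as 'unsaved_count'.

    The JSON layer turns the integer capture IDs into strings,
    so convert them back to int.
    """
    return {int(key): val for key, val in info.items()
            if isinstance(val, dict)}

# Illustrative response shape:
info = {'unsaved_count': 2,
        '3': {'trigger_time': '1700000000', 'target_filename': 'pitch_0003'},
        '4': {'trigger_time': '1700000005', 'target_filename': 'pitch_0004'}}
print(sorted(queued_videos(info)))  # [3, 4]
```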
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods that the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where unsaved captured videos tend to accumulate during play and the camera catches up between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want to capture, there will be 600 ms of unneeded video after the duration of interest.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is 800 ms × 1000 fps = 800 frames before the trigger, i.e. frame -800; remember all pre-trigger frames have a negative frame number. The last frame captured is frame 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100 ms before the event (800 ms before the trigger). <br />
* Last frame: -600, which is 100 ms after the event; 200 ms of video at 1000 fps is 200 frames in total.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using the CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or a filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    # The keys compared below are placeholders for the camera's save state<br />
    # and overall status; use the keys your get_camstatus() actually returns.<br />
    if new_status.get('save_state') != status.get('save_state'):<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('state') != status.get('state'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger occurs by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger portion of the previously empty DDR3 buffer. <br />
* The ''unsaved_count'' entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the CAMAPI get_captured_video_info() returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the get_camstatus() ''active_buffer'' value (really the capture count); once a change is detected, invoke the CAMAPI get_captured_video_info() method to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if isinstance(vid, dict):<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, monitor the camera status to determine when the save has completed.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition.<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # e.g. record the saved file, then start the next queued selective_save()<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # e.g. react to a storage-full or other error condition<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs, by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5670Captured video queue control2024-01-13T18:47:48Z<p>Tfischer: /* Identifying queued videos */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and ending frames to save, using the CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered (meaning the first captured video after the <tt>run()</tt> method is invoked will have a capture ID / ''buffer_number'' of 1).<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods that the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where unsaved captured videos tend to accumulate during play and the camera catches up between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want to capture, there will be 600 ms of unneeded video after the duration of interest.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is 800 ms × 1000 fps = 800 frames before the trigger, i.e. frame -800; remember all pre-trigger frames have a negative frame number. The last frame captured is frame 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (the frames from 100 ms before the event to 100 ms after the event): <br />
* First frame: -800, which is 100 ms before the event and 800 ms before the trigger. <br />
* Last frame: -600, which is 100 ms after the event; capturing 200 ms of video at 1000 fps gives a total of 200 frames.<br />
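These numbers can be derived mechanically. The helper below is a sketch (the function name is ours, not part of CAMAPI) that computes the frames of interest from the trigger latency and the desired capture window:<br />
<br />
```python
# Compute the frames of interest relative to the trigger frame (frame 0).
# Pre-trigger frames are negative, as described above.
def frames_of_interest(fps, latency_ms, pre_event_ms, post_event_ms):
    frames_per_ms = fps / 1000.0
    event_frame = -latency_ms * frames_per_ms        # the event precedes the trigger
    first = int(event_frame - pre_event_ms * frames_per_ms)
    last = int(event_frame + post_event_ms * frames_per_ms)
    return first, last

# Radar example: 1000 fps, 700 ms latency, 100 ms before/after the event.
print(frames_of_interest(1000, 700, 100, 100))  # (-800, -600)
```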
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
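For example, a client wrapper that attaches the detected ball speed to the trigger might look like the following. This is a sketch: the <tt>trigger()</tt> key names ''user_parm'' and ''base_filename'' come from the interaction table above, while the wrapper function and the <tt>cam</tt> object are assumptions.<br />
<br />
```python
# Hypothetical sketch: trigger the camera and tag the resulting capture
# with the radar-detected ball speed. 'user_parm' is echoed back by
# get_captured_video_info(), making it easy to match captures to pitches.
def trigger_with_speed(cam, speed_mph):
    return cam.trigger({
        'user_parm': 'speed=%.1fmph' % speed_mph,
        'base_filename': 'pitch_%03d' % int(speed_mph),
    })
```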
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    # A larger capture count means a new capture has completed.<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    # The key names below are assumptions; use whichever get_camstatus()<br />
    # keys reflect save progress and overall camera state on your camera.<br />
    if new_status.get('save_state') != status.get('save_state'):<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('state') != status.get('state'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger has occurred by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger buffer of the DDR3 buffer that was previously empty. <br />
* The ''unsaved_count'' entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the dictionary returned by the CAMAPI get_captured_video_info() method (and the ''unsaved_count'' in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring for a change in the get_camstatus() active_buffer value (really the capture count); once a change is detected, the CAMAPI get_captured_video_info() method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keys are trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for vid in vids.values():<br />
        # Skip non-video entries such as 'unsaved_count'.<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera will indicate that the save has completed via the status information returned by get_camstatus().<br />
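Tying this to the radar example above, saving only the frames of interest from a given capture might look like the following. This is a sketch: the key names follow the <tt>selective_save()</tt> rows in the interaction table, while the helper function and <tt>cam</tt> object are ours.<br />
<br />
```python
# Hypothetical sketch: save just the window of interest from one capture
# in the queue. 'buffer_number' identifies the capture; 'start_frame' and
# 'end_frame' bound the saved segment; 'filename' (optional) is the
# highest-priority filename override.
def save_frames_of_interest(cam, buffer_number, first_frame, last_frame, filename=None):
    request = {
        'buffer_number': buffer_number,
        'start_frame': first_frame,
        'end_frame': last_frame,
    }
    if filename is not None:
        request['filename'] = filename
    return cam.selective_save(request)
```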
<br />
It is also possible for a save to be interrupted, due to a storage-full condition; the controlling application should monitor for and handle this case as well. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Application specific; for example, initiate the next queued<br />
    # selective_save(), or delete the just-saved video using<br />
    # delete_captured_videos() to free DDR3 room for new captures.<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Application specific; for example, alert the operator if the<br />
    # camera reports an error condition.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs, by sending CAMAPI <tt>get_camstatus()</tt> responses whenever the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now as the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5668Captured video queue control2024-01-13T18:34:10Z<p>Tfischer: </p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save, using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, triggering the camera is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user, via the web user interface or a client application, controls the encoding parameters, the starting and ending frames to save, and the order the captured videos are saved in. When configured for '''review before save''', there is no support for background save and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, thus making the camera ready to capture more videos.<br />
<br />
The new '''manual''' save mode uses background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
* Deleted, including deleting all videos in the capture queue by calling the CAMAPI <tt>run()</tt> method.<br />
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> key: '''buffer_number''': the actual meaning is the capture number.<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons with capture ID number being a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered.<br />
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of one of the automatic background save modes, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, where unsaved captured videos tend to accumulate while play continues and the camera catches up on saves between half innings.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want, there will be 600 ms of captured video after the window of interest.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is -(0.8 s &times; 1000 fps) = frame -800; remember that all pre-trigger frames have negative frame numbers. The last frame captured is frame 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (the frames from 100 ms before the event to 100 ms after the event): <br />
* First frame: -800, which is 100 ms before the event and 800 ms before the trigger. <br />
* Last frame: -600, which is 100 ms after the event; capturing 200 ms of video at 1000 fps gives a total of 200 frames.<br />
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    # A new capture finished: active_buffer (really the capture count) grew.<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    # The key names below are assumptions; use the key names your<br />
    # get_camstatus() response actually reports for save state and status.<br />
    if new_status.get('save_state') != status.get('save_state'):<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('status') != status.get('status'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger has occurred by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger buffer of the DDR3 buffer that was previously empty. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the CAMAPI get_captured_video_info() returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring a change in get_camstatus() active_buffer (really capture count), and once detected, the CAMAPI get_captured_video_info() method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keyed by trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera status reported by get_camstatus() indicates when the save has completed.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Placeholder: record that the save finished, then start the next<br />
    # selective_save() or delete the saved capture, as your workflow requires.<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Placeholder: react to camera status changes, e.g. a storage-full condition.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5667Captured video queue control2024-01-13T18:27:07Z<p>Tfischer: /* Background */</p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured, but not saved, videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to allow an external device to have direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way - every trigger adds a new captured video. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI <tt>selective_save()</tt> method. The controlling external device uses the CAMAPI <tt>delete_captured_videos()</tt> method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode and manual save mode, the entire captured video queue is emptied by calling CAMAPI <tt>run()</tt> method. <br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI <tt>delete_captured_videos()</tt> method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, the trigger is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user (via the web user interface) or a client application controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, making the camera ready to capture videos.<br />
<br />
The new '''manual''' save mode supports background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
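As a sketch, the request payloads for the operations listed above might look like the following. The parameter keys (''buffer_number'', ''start_frame'', ''end_frame'', ''delete_list'', ''delete_all'') are the documented CAMAPI parameters; the plain-dict form is illustrative, not the exact client call signature.<br />

```python
# Illustrative CAMAPI request payloads for manual save mode queue control.
save_whole_capture = {'buffer_number': 3}          # selective_save(): whole video
save_frame_range = {'buffer_number': 3,            # selective_save(): only the
                    'start_frame': -800,           # frames of interest
                    'end_frame': -600}
delete_some = {'delete_list': [1, 2]}              # delete_captured_videos()
delete_everything = {'delete_all': True}           # delete_captured_videos()
```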
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for the value of the key having a dictionary data type. Sorry about that. The key will also be an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt></tt>, <tt></tt>, <tt></tt> key: '''buffer_number''':<br />
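Walking the <tt>get_captured_video_info()</tt> response as described above might look like the following sketch. The response literal is fabricated to show the documented shape: integer buffer numbers serialized as string keys (the python JSON artifact noted above), with dict values holding the video information, next to scalar keys such as ''unsaved_count''.<br />

```python
# Fabricated get_captured_video_info() response, for illustration only.
response = {
    "unsaved_count": 2,
    "1": {"trigger_time": "1700000000", "target_filename": "slomo_1"},
    "2": {"trigger_time": "1700000005", "target_filename": "slomo_2"},
}

# Recover the integer buffer numbers by keeping only dict-valued entries.
captured = {int(k): v for k, v in response.items() if isinstance(v, dict)}
print(sorted(captured))   # [1, 2]
```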
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons; capture ID number would be a more descriptive name. When CAMAPI <tt>run()</tt> is called, the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered.<br />
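The ''buffer_number'' bookkeeping described above can be modeled as follows. This is a toy model of the documented behavior, not camera code.<br />

```python
# Toy model of buffer_number (capture ID) bookkeeping.
class CaptureCounter:
    def __init__(self):
        self.buffer_number = 0

    def run(self):
        # CAMAPI run() empties the captured video queue and resets the counter.
        self.buffer_number = 0

    def trigger(self):
        # Each trigger produces a capture identified by the next number.
        self.buffer_number += 1
        return self.buffer_number

cam = CaptureCounter()
cam.run()
print(cam.trigger(), cam.trigger())   # 1 2
```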
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods that the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See the filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides the default filename and any filename set via the CAMAPI methods run(), reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see the live view in between video saves. This is useful in situations like capturing baseball pitches, where the queue of captured videos keeps growing during play and the camera catches up on saves during the half-inning break.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100 ms of capture before the event and 100 ms of capture after the event. Since the 700 ms latency is larger than the 100 ms of post-event video you want, there will be 600 ms of unwanted video after the duration of interest.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
At 1000 fps, the 800 ms pre-trigger buffer holds 800 frames, so the first frame captured is -800. Remember that all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those frames 100 ms before the event and 100ms after the event). <br />
* First frame: -800, which is 100 ms before the event; the event occurs at frame -700, 700 ms before the trigger. <br />
* Last frame: -600, which is 100 ms after the event. Saving frames -800 through -600 at 1000 fps gives the desired 200 ms of video, about 200 frames.<br />
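The frame-window arithmetic above can be sketched in Python. This is a minimal sketch; the constant names are illustrative, not CAMAPI parameter names.<br />

```python
FPS = 1000                  # capture rate, frames per second
TRIGGER_LATENCY_MS = 700    # delay from event until the trigger arrives
PRE_EVENT_MS = 100          # video wanted before the event
POST_EVENT_MS = 100         # video wanted after the event

# The pre-trigger buffer must cover the latency plus the pre-event window.
pre_trigger_ms = TRIGGER_LATENCY_MS + PRE_EVENT_MS            # 800 ms

# Frame 0 is the trigger frame; pre-trigger frames are negative.
first_frame = -(pre_trigger_ms * FPS // 1000)                 # -800
last_frame = first_frame + (PRE_EVENT_MS + POST_EVENT_MS) * FPS // 1000   # -600
```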
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    # A new capture finished: active_buffer (really the capture count) grew.<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    # The key names below are assumptions; use the key names your<br />
    # get_camstatus() response actually reports for save state and status.<br />
    if new_status.get('save_state') != status.get('save_state'):<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('status') != status.get('status'):<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger has occurred by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger buffer of the DDR3 buffer that was previously empty. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the CAMAPI get_captured_video_info() returned dictionary (and the unsaved_count in that returned dictionary is incremented as well).<br />
<br />
Processing a new capture consists of monitoring a change in get_camstatus() active_buffer (really capture count), and once detected, the CAMAPI get_captured_video_info() method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keyed by trigger times as integers<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera status reported by get_camstatus() indicates when the save has completed.<br />
<br />
It is also possible for a save to be interrupted due to a storage-full condition. <br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Placeholder: record that the save finished, then start the next<br />
    # selective_save() or delete the saved capture, as your workflow requires.<br />
    pass<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In python, it would be something like:<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # Placeholder: react to camera status changes, e.g. a storage-full condition.<br />
    pass<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket to notify clients when a change in camera state occurs by sending CAMAPI <tt>get_camstatus()</tt> responses when the camera's state changes. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Edgertronic_camera_software_recovery&diff=5666Edgertronic camera software recovery2023-12-29T00:53:46Z<p>Tfischer: </p>
<hr />
<div>A camera is bricked when the software on the '''micro SD card''' that runs the camera is no longer usable. The micro SD card is the small one, in the recessed slot between the big SD card and the LEDs, that you normally don't remove. If you lost power during a software update, you may have bricked your camera. If you think your camera is bricked for some other reason, please send an email to '''info@edgertronic.com''' with the details of what happened right before the camera stopped functioning correctly.<br />
<br />
= Get latest camera software =<br />
<br />
Download the microSD card image file:<br />
<br />
* <span style="color:purple">'''[https://www.edgertronic.com/releases/v2.5.3rc27/sdcard_image/sdcard.v2.5.3rc27.img.zip v2.5.3rc27 SD card image]'''</span> <br />
<br />
Unzip the downloaded file to get the microSD card image file.<br />
<br />
= Removing the micro SD card =<br />
<br />
The micro SD card is in the slot next to the big SD card. The micro SD card is recessed in a spring-loaded slot. You remove the micro SD card by gently pushing the card farther into the camera (just like you do with the big SD card) and then the micro SD card will pop out. You may find a paperclip is the right size to press the micro SD card farther into the camera.<br />
<br />
Be sure to press the micro SD card straight in (meaning perpendicular to the camera back) otherwise the card may hang up on the edge of the slot.<br />
<br />
'''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' <br />
<br />
= Unbrick using Windows =<br />
<br />
== Set Up ==<br />
<br />
Before you can actually write to the micro SD card from a Windows machine, you need to download a Windows program that can image the contents of the image file directly over the entire micro SD card. There are several such programs to choose from.<br />
<br />
== Disk Imaging ==<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (pc)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards. I recently realized that BalenaEtcher runs on Windows as well. I tried it out using the following steps and was successful on a Windows 10 computer.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for Windows (x86|x64)''. I tested v1.5.116.<br />
# Install a micro SD card into your PC. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Windows PC.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Mac O.S.=<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (mac)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for MAC''. I tested v1.5.86.<br />
# Install a micro SD card into your Mac. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Mac.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Ubuntu =<br />
<br />
* Plug your micro SD card into the Ubuntu computer using the appropriate adaptor. Find the dev name for your micro SD card using command:<br />
<br />
<pre style="background:#d6e4f1"><br />
df<br />
</pre><br />
<br />
* Unmount the file system on the micro SD card using the command below, where ''N'' is the partition number taken from the above command output. For example, if the dev name is /dev/sdb1, use N=1:<br />
<br />
<pre style="background:#d6e4f1"><br />
umount /dev/sdbN<br />
</pre><br />
<br />
* Use the '''dd''' command to completely overwrite the contents of the microSD card. In the example below, the downloaded disk image is sdcard.20151216204804.img. Please change this as appropriate if you downloaded the image to a different location. Make sure to use the correct dev name. '''If the dev name found was /dev/sdb1, use /dev/sdb in the 'dd' command (omit the 'N') in the case of Ubuntu.'''<br />
<pre style="background:#d6e4f1"><br />
FILE=sdcard.20151216204804.img<br />
sudo dd bs=64M if=~/Downloads/$FILE of=/dev/sdb<br />
</pre><br />
<br />
* Now that 'dd' has finished, run the 'sync' command and then unplug the microSD card from the Ubuntu system.<br />
<pre style="background:#d6e4f1"><br />
sync<br />
</pre><br />
<br />
= Reinstalling micro SD card into camera =<br />
<br />
[[Image:Inserting-micro-sd-card.jpg|300px|thumb|right]]<br />
<br />
Once you have an imaged micro SD card, insert it back into the camera. '''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' Insert the micro SD card with the camera powered off. You can use a paperclip to gently push the micro SD card into the slot. Give the camera about a minute; then the LEDs should be back on and the camera should update itself. If the image you used to update the camera was an older version of the software, you will need to conduct a software update manually after the camera finishes the re-image process.<br />
<br />
Simply copy the newest software update (or the desired software version's update) file directly onto the SD card (the big one), power on the camera, and wait through the [[LEDs|LED]] “white pattern” as the camera updates.<br />
<br />
If the camera still does not work, try a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
= Trying out a beta release =<br />
<br />
We are a rather open company. We use Open Source software. As much as practical, we make the camera's source code available. We work hard supporting CAMAPI so you can integrate the camera into your existing processes. We even make our buggy beta releases available for you to try out. We ask only one simple request in return. Please, please, please keep the fully tested micro SD card that came with the camera intact. Go buy another quality U10 class micro SD card to use when running the beta release software. That way, if the beta release causes more problems than it solves, you can simply swap out the micro SD card with the one that came with the camera and be back in business.<br />
<br />
To see what beta release is available, browse to the [http://www.edgertronic.com/releases/ releases directory].<br />
<br />
Since you are going to be programming that brand new microSD card, first [[SDK_-_Developer_tricks#Extracting_sdcard.img_file_from_update_tarball|extract the SD card image]] from the beta release update tarball and then program the shiny new microSD card with the beta version of the software, as described above. You may be able to use the extracted image from the sdcard_image directory if it exists for the release you want to test.<br />
<br />
If you are brave enough to try out the beta release, you likely have good suggestions on what we can be doing better. Please share those suggestions with us at '''info@sanstreak.com''' .<br />
<br />
[[Category:Troubleshooting]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Edgertronic_camera_software_recovery&diff=5665Edgertronic camera software recovery2023-12-29T00:52:07Z<p>Tfischer: /* Get latest camera software */</p>
<hr />
<div>A camera is bricked when the software on the '''micro SD card''' that runs the camera is no longer usable. The micro SD card is the small one, in the recessed slot between the big SD card and the LEDs, that you normally don't remove. If you lost power during a software update, you may have bricked your camera. If you think your camera is bricked for some other reason, please send an email to '''info@edgertronic.com''' with the details of what happened right before the camera stopped functioning correctly.<br />
<br />
= Get latest camera software =<br />
<br />
Download the microSD card image file:<br />
<br />
* <span style="color:purple">'''[https://www.edgertronic.com/releases/v2.5.3rc27/sdcard_image/sdcard.v2_5_3rc27.img.zip v2.5.3rc27 SD card image]'''</span> <br />
<br />
Unzip the downloaded file to get the microSD card image file.<br />
<br />
= Removing the micro SD card =<br />
<br />
The micro SD card is in the slot next to the big SD card. The micro SD card is recessed in a spring-loaded slot. You remove the micro SD card by gently pushing the card farther into the camera (just like you do with the big SD card) and then the micro SD card will pop out. You may find a paperclip is the right size to press the micro SD card farther into the camera.<br />
<br />
Be sure to press the micro SD card straight in (meaning perpendicular to the camera back) otherwise the card may hang up on the edge of the slot.<br />
<br />
'''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' <br />
<br />
= Unbrick using Windows =<br />
<br />
== Set Up ==<br />
<br />
Before you can actually write to the micro SD card from a Windows machine, you need to download a Windows program that can image the contents of the image file directly over the entire micro SD card. There are several such programs to choose from.<br />
<br />
== Disk Imaging ==<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (pc)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards. I recently realized that BalenaEtcher runs on Windows as well. I tried it out using the following steps and was successful on a Windows 10 computer.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for Windows (x86|x64)''. I tested v1.5.116.<br />
# Install a micro SD card into your PC. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Windows PC.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Mac O.S.=<br />
<br />
'''Using [https://www.balena.io/etcher BalenaEtcher] (mac)'''<br />
<br />
For years, we have been using BalenaEtcher on a Mac computer to burn the micro SD cards.<br />
<br />
# [https://www.balena.io/etcher BalenaEtcher Download] and install ''Download for MAC''. I tested v1.5.86.<br />
# Install a micro SD card into your Mac. I pressed cancel when asked if I wanted to format the card.<br />
# Download the zip file from inside the sdcard_image directory as described above.<br />
# Run BalenaEtcher<br />
## Select ''Flash from file''. The zip file you downloaded will likely be in the cleverly named Downloads directory on your Mac.<br />
## Select ''Select target''. I had to show hidden files. I installed an 8GB micro SD card, so I selected the device with SD in the name and a size of 7.88GB. For some reason Balena had incorrectly identified the SD card as a system drive, so I had to click ''Yes, I'm sure''. If you are uncertain, exit Balena, remove the micro SD card, and run Balena again to see which device is no longer in the select target list.<br />
## Wait until Balena reports ''Flash Complete!'' before removing the micro SD card.<br />
<br />
= Unbrick using Ubuntu =<br />
<br />
* Plug your micro SD card into the Ubuntu computer using the appropriate adaptor. Find the dev name for your micro SD card using command:<br />
<br />
<pre style="background:#d6e4f1"><br />
df<br />
</pre><br />
<br />
* Unmount the file system on the micro SD card using the command below, where ''N'' is the partition number taken from the above command's output. For example, if the dev name is /dev/sdb1, then N is 1:<br />
<br />
<pre style="background:#d6e4f1"><br />
umount /dev/sdbN<br />
</pre><br />
<br />
* Use the '''dd''' command to completely overwrite the contents of the microSD card. In the example below, the downloaded disk image is sdcard.20151216204804.img. Please change this as appropriate if you downloaded the image to a different location. Make sure to use the correct dev name. '''If the dev name found is /dev/sdb1, then use /dev/sdb in the 'dd' command (you need to omit the partition number 'N') in the case of Ubuntu.'''<br />
<pre style="background:#d6e4f1"><br />
FILE=sdcard.20151216204804.img<br />
sudo dd bs=64M if=~/Downloads/$FILE of=/dev/sdb<br />
</pre><br />
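If you script these steps, the whole-disk device name for 'dd' can be derived from the partition name that 'df' reports by stripping the trailing partition number. A small sketch, where the device name is only an example:<br />

```shell
# /dev/sdb1 is an example partition name; substitute the one df reported.
# Note: this simple pattern suits /dev/sdXN names; mmcblk-style names differ.
PART=/dev/sdb1
DISK=$(printf '%s' "$PART" | sed 's/[0-9]*$//')
echo "$DISK"   # /dev/sdb
```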
<br />
* Once the 'dd' command has finished, run the 'sync' command and then unplug the micro SD card from the Ubuntu system.<br />
<pre style="background:#d6e4f1"><br />
sync<br />
</pre><br />
<br />
= Reinstalling micro SD card into camera =<br />
<br />
[[Image:Inserting-micro-sd-card.jpg|300px|thumb|right]]<br />
<br />
Once you have an imaged micro SD card, insert it back into the camera. '''It is important that you insert the micro-SD card in the correct orientation. The micro SD label faces the system and camera LEDs and the gold contacts face the big SD card. Incorrectly inserting or forcing the micro SD card will cause damage to the camera that is not covered under warranty.''' Insert the micro SD card with the camera powered off. You can use a paperclip to gently push the micro SD card into the slot. Give the camera about a minute; the LEDs should then come back on and the camera should update itself. If the image you used to update the camera was an older version of software, you will need to perform a software update manually after the camera finishes the re-image process.<br />
<br />
Simply take the newest software update (or desired software version's update) file and copy it directly onto the SD card (the big one), power on the camera, and wait through the [[LEDs|LED]] “white pattern” as the camera updates.<br />
<br />
If the camera still does not work, try a [[Multi-function_button#Factory_reset|factory reset]].<br />
<br />
= Trying out a beta release =<br />
<br />
We are a rather open company. We use Open Source software. As much as practical, we make the camera's source code available. We work hard supporting CAMAPI so you can integrate the camera into your existing processes. We even make our buggy beta releases available for you to try out. We ask just one simple request in return: please, please, please keep the fully tested micro SD card that came with the camera intact. Buy another quality U10 class micro SD card to use when running the beta release software. That way, if the beta release causes more problems than it solves, you can simply swap out the micro SD card with the one that came with the camera and you are back in business.<br />
<br />
To see what beta release is available, browse to the [http://www.edgertronic.com/releases/ releases directory].<br />
<br />
Since you are going to be programming that brand new microSD card, first [[SDK_-_Developer_tricks#Extracting_sdcard.img_file_from_update_tarball|extract the SD card image]] from the beta release update tarball and then program the shiny new microSD card with the beta version of the software, as described above. You may be able to use the extracted image from the sdcard_image directory if it exists for the release you want to test.<br />
<br />
If you are brave enough to try out the beta release, you likely have good suggestions on what we can be doing better. Please share those suggestions with us at '''info@sanstreak.com''' .<br />
<br />
[[Category:Troubleshooting]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Template:Known_defects&diff=5664Template:Known defects2023-12-29T00:51:04Z<p>Tfischer: </p>
<hr />
<div>== Known defects ==<br />
<br />
The following is the list of known defects in the version 2.5.x releases. <span style="color:#dd38da">[1]</span><br />
<br />
=== 20231223103452 Overlaying text and graphics causes crash for very small image heights ===<br />
<br />
If you set the vertical resolution to 96 and enable all overlays, the camera software crashes and requests a factory reset to recover.<br />
<br />
=== 20231005074253 Spurious Genlock Error reported when camera is properly genlocked ===<br />
<br />
Occasionally a Genlock Error is incorrectly reported by a camera properly configured as a genlock receiver. The captured video is correct. <br />
<br />
=== 20220415111422 Setting the DNS server IP address via CAMAPI net_set_configuration() is broken ===<br />
<br />
The requested_dns_server and dns_server keys are not handled properly in the dictionary passed to <tt>net_set_configuration()</tt>.<br />
<br />
=== 20190302135422 SC2 SC2+ SC2X pretrigger frame count inaccurate when the pretrigger buffer is not full and a trigger occurs ===<br />
<br />
If you trigger a SC2, SC2+, or SC2X camera before the pretrigger buffer fills, the metadata file will have an inaccurate number for the frames captured before the trigger event. This also affects review mode.<br />
<br />
=== 201412161349 Multishot Genlock buffering can get out of sync due to power cycle or selective save ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
If you are in the middle of a Genlocked Multishot sequence and power cycle one of the cameras, the buffering will be out of sync once the camera is powered back on. Until the cameras configured for genlock talk to each other over the network cable, there is no way to resynchronize which multishot buffer is being used.<br />
<br />
Work around: Power cycle all cameras that are configured and cabled for multishot and genlock.<br />
<br />
Similarly, if you are in the middle of a Genlocked Multishot sequence and decide to save your video set, you cannot just press the save button on the genlock source camera's GUI since only the videos on the genlock source will be saved. <br />
<br />
Work around: you must press the save button on the genlock source and all the receiver camera GUIs.<br />
<br />
=== 201409120935 Updating camera fails if there is a space ' ' character in the update tarball filename ===<br />
<br />
If you download the update tarball more than once, some operating systems add a space to the file name (e.g. " (1)") so the file being downloaded will have a unique filename. If you use the file with the space in the filename, the update will fail. To work around the defect, remove the big SD card, delete the file with a space in the filename, and store the original file on the big SD card. The camera will then update correctly.<br />
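Alternatively, you can strip the copy suffix on your computer before copying the file to the big SD card. A sketch, where the tarball filename is hypothetical:<br />

```shell
# Hypothetical downloaded filename with the browser's " (1)" copy suffix.
FILE="edgertronic_update (1).tgz"
# Remove the suffix to recover a filename without a space.
FIXED=$(printf '%s' "$FILE" | sed 's/ (1)//')
echo "$FIXED"   # edgertronic_update.tgz
```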
<br />
=== 201409091802 Cancel trigger at the end of capture misbehaves ===<br />
<br />
No fix defect -- This is really not a defect.<br />
<br />
On occasion, if you cancel the trigger just as the post trigger capture buffer is being filled, the camera will calibrate then save the video data instead of properly handling the cancel.<br />
<br />
Work around: This is a race condition. The user thinks the camera is still capturing data when they press cancel, but in fact the camera has already switched to saving the captured video. Simply trim the video to get back to filling the pre-trigger buffer.<br />
<br />
=== 201408271324 Genlock false triggers ===<br />
<br />
No fix defect -- there are no plans to fix this defect; the hardware design doesn't support any means to fix the issue.<br />
<br />
This defect only occurs when using the [[Genlock]] feature with multiple cameras and a genlock cable.<br />
<br />
Plugging in genlock cable may trigger both genlock source and receiver cameras. Unplugging genlock cable may trigger both genlock source and receiver cameras. Powering off a genlocked camera may trigger any other connected cameras.<br />
<br />
Work around: connect all genlock cables before powering on the cameras.<br />
<br />
=== 201312111624 File timestamp is in GMT ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
The camera was intentionally designed to use GMT for the timezone when saving video files. Some might consider this a defect (issue #182).<br />
<br />
As of 2.4.1 you can create your own filename pattern and not use seconds since 1970 in GMT timezone.<br />
<br />
=== 201312021613 Browser forward and back buttons may change camera settings ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
If you browse to another site and then use the browser back button to return to viewing the camera, your camera settings may have changed.<br />
<br />
Work around: either don't browse to another web site or don't use the back button when you do; simply browse to the IP address of the camera.<br />
<br />
=== 201311041114 Playing last recorded video can fail in rare cases ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
The camera will automatically switch which storage device is used when the current storage device fills up and another, non-full, storage device is available. You can not play the video recorded right before the switch occurs since the active storage device has changed. <br />
<br />
Work around: You can remove the storage device and properly play the video by retrieving the video file from the non-active storage device.<br />
<br />
=== 201311101454 CAMAPI does not detect new space on mounted storage device ===<br />
<br />
No fix defect -- there are no plans to fix this defect.<br />
<br />
CAMAPI handles changes in storage status using an interrupt scheme (mdev). If your SD card is full and you telnet into the camera and delete some files, no event occurs, so CAMAPI doesn't detect that there is now room, and the memory full message is still displayed.<br />
<br />
Workaround: after deleting the files, remove and reinsert the storage device to create a change in storage status event.<br />
<br><br />
<br><br />
<span style="color:#dd38da">[1]</span> The naming convention for the defect numbers is the format '''YYYYMMDDHHMMSS'''</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Software_releases&diff=5663Software releases2023-12-29T00:49:03Z<p>Tfischer: </p>
<hr />
<div>{{Software release version 2.5.3}}<br />
<br />
{{Known defects}}<br />
<br />
= Software release images =<br />
<br />
{{Software release images}}<br />
<br />
= Older releases =<br />
<br />
All cameras can safely downgrade to version 2.5.2, 2.5.1, 2.4.1, 2.3.1 or 2.2.1. Software versions older than 2.2.1 have been removed due to hardware changes that make older versions incompatible with some cameras.<br />
* [[Software release version 2.5.2]]<br />
** Released: April 14, 2022<br />
* [[Software release version 2.5.1]]<br />
** Released: June 8th, 2021<br />
* [[Software release version 2.4.1]]<br />
** Released: March 24th, 2020<br />
* [[Software release version 2.3.1]]<br />
** Released: Jan 4th, 2019<br />
* [[Software release version 2.2.2]]<br />
** Released: Oct 22nd, 2017<br />
* [[Software release version 2.2.1]]<br />
** Released: Feb 24th, 2017<br />
* [[Software release version 2.1]]<br />
** Released: April 4th, 2015<br />
* [[Software release version 2.0]]<br />
** Released: Sept 11th, 2014<br />
* [[Software release version 1.3]]<br />
** Released: July 30th, 2014<br />
* [[Software release version 1.2]]<br />
** Released: April 5th, 2014<br />
* [[Software release version 1.1]]<br />
** Released: Jan 10th, 2014<br />
* [[Software release version 1.0]]<br />
** Released: Dec 7th, 2013<br />
<br />
After downgrading the camera, a [[User Manual - Factory reset|factory reset]] is recommended.<br />
<br />
= Anticipated features =<br />
<br />
Let us know what features you would like to see added to the edgertronic high speed camera. Here are some requests we have received:<br />
<br />
* Pretrigger percentage greater than 100%. This allows devices, like radar, that trigger the camera after the action has occurred to avoid saving post-action frames.<br />
* Over the network software update - allow upload of an update file from the user's computer<br />
* Support [https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_system Design rules for Camera File system]<br />
* Usability - when using CIFS, don't check for free space so often<br />
* Update Open Source packages<br />
* Performance tuning / SPI overhead investigation<br />
* White balance, focus aid and exposure histogram<br />
* Camera auto discovery on the network<br />
* User name and password to control who can browse to the camera<br />
* Add support for <video> tag in HTML served up by the camera<br />
* Add support for the camera telling the web UI when a status change occurs instead of using 1 second polling<br />
* User supplied gamma correction table.<br />
* Image based triggering<br />
* When one camera on the local network is triggered, have it use [[SDK_-_Multicamera_network_trigger#Camera_initiated_multi-camera_trigger|multicast network trigger]] to trigger the rest of the cameras on the same local network. Available now using [[Extending edgertronic capabilities - Extending edgertronic camera functionality|User Added URLs]] facility.<br />
* Support rotating the camera image 90 degrees so the camera can take highest frame rate capture of vertical images.<br />
* Node.js binding for CAMAPI (python and .NET binding already available)<br />
* Enhance multi-rate capture where the physical trigger button can be used to switch from post1 capture rate to post2 capture rate.<br />
<br />
= Requested features =<br />
<br />
* Allow background triggers while in review before save. Allow captures to be discarded after saved in review before save.<br />
* Having the camera POST its status (namely when a file is done encoding) to a user specified HTTP address. This may be possible using the [[Extending edgertronic capabilities - Extending edgertronic camera functionality|User Added URLs]] facility.<br />
* Sound based triggering - This will not be supported.<br />
* Halogen color temp setting - Because of the decreased use in halogen lighting this feature is less useful over time.<br />
* Multi-trigger in multi-shot capture - Trigger capturing the next video while the current video is being captured, with no frames being dropped.<br />
* Extend CAMAPI <tt>review_frame()</tt> method to return the actual frame image instead of the status<br />
* User added URL that copies videos and metadata from SD card to a network computer using FTP.<br />
* Allow genlock signaling to just trigger a camera. Imagine an environment with three genlocked cameras focused on the local event activity and a forth camera taking an overview video that also covers before and after activity. It would be nice to simply use the trigger signal from the genlock source camera.<br />
* Enable audio recording<br />
* UDP discovery <br />
* Trackman ready physical extension for the camera<br />
* A submillisecond time integration with tracking devices<br />
* Background save for selective save<br />
* URL factory_reset3 which performs a factory reset on everything but the /etc/network/interfaces network settings.<br />
* Trigger delay compensation - camera setting to compensate for the delay between when an event occurs and delay in the system that is generating the trigger. For example, a human's response time is pressing a trigger button after seeing lightning is around 400 ms. The implementation would be to increase the pre-trigger time by the trigger delay compensation, then when saving discard all the frames from the point in the capture that is trigger minus the trigger delay compensation value.<br />
* Enhance CAMAPI <tt>selective_save()</tt> method to allow specifying a frame dropping pattern as the video is being saved. The frames to be dropped would be specified via a list of (starting_frame_number, save_ratio, ratio_slope) tuples. Imagine a captured video of a baseball pitch at 700 fps. Assume the capture duration is 3 seconds, with 1.4 seconds being the wind up, 0.7 second pitch and 0.9 seconds of follow through with the playback frame rate set to 30 fps. Then the wind up could be saved at half playback speed using 60 fps (meaning of the 980 frames saved in the first 1.4 seconds at 700 fps, only save 84 frames, or dropping 11.6 frames for each frame saved), make a transition from 60 fps to 700 fps, save the pitch at 700 fps, then again transition back from 700 fps to 60 fps.<br />
<br />
[[Category: Releases]]</div>Tfischerhttps://edgertronic.mywikis.wiki/w/index.php?title=Captured_video_queue_control&diff=5662Captured video queue control2023-12-29T00:45:54Z<p>Tfischer: </p>
<hr />
<div>= Background =<br />
<br />
Internal to the camera is a queue that holds the captured videos. When you trigger the camera, a video is captured and added to the queue. The camera will either automatically save the video (when save mode is set to auto, background-fifo, or background-lifo) or the camera will let you review the video and save the section you care about (save mode set to review-before-save).<br />
<br />
A new manual save mode has been added to give an external device direct control over the queue that holds captured videos. Captured videos are still added to the queue in the same way, meaning when the camera is triggered. The difference is that in manual save mode the controlling external device has to specify which captured video to save using the CAMAPI selective_save() method. The controlling external device uses the CAMAPI delete_captured_videos() method to remove captured videos from the capture queue when they are no longer needed.<br />
<br />
The manual save mode is similar to review-before-save save mode with these key differences:<br />
<br />
* There is no webUI support for manual save mode. You have to program an external controlling device to use manual save mode.<br />
* In manual save mode, the save is in the background, so the camera can continue to be triggered to capture new videos while the camera is saving a previously captured video.<br />
* In review-before-save save mode, the entire captured video queue is emptied by calling CAMAPI run() method.<br />
* In manual save mode, you can selectively free up room in DDR3 for more captures by using CAMAPI delete_captured_videos() method.<br />
<br />
= Client control over camera's captured video queue =<br />
<br />
When the camera is triggered, a video is captured to the DDR3 memory and added to the captured video queue. Once the DDR3 memory is full of captured videos, the trigger is disabled. The captured videos remain in the queue (and thus in DDR3 memory) until the captured video is discarded (or the camera loses power).<br />
<br />
When the camera's save mode is configured for '''auto''', '''background FIFO''', or '''background LIFO''', the camera is in control, automatically saving and discarding captured videos from the queue. Camera control of the captured video queue means once the camera settings are configured, the user simply needs to trigger the camera, and the camera takes care of the rest.<br />
<br />
The camera also supports a '''review before save''' configuration where the user (via the web user interface) or a client application controls the encoding parameters, the starting and ending frames to save, and the order in which the captured videos are saved. When configured for '''review before save''', there is no support for background save, and all the captured videos are discarded when the CAMAPI <tt>run()</tt> method is invoked, making the camera ready to capture videos.<br />
<br />
The new '''manual''' save mode supports background save so new videos can be captured while a previously captured video is being saved. Manual save mode means a user-supplied computer application controls the camera's captured video queue. '''Manual''' save mode is not available via the camera's web user interface.<br />
<br />
== Queue control ==<br />
<br />
The camera supports the '''manual''' save mode allowing a software client application control over how the captured videos are processed. A captured video can be:<br />
<br />
* Saved, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, specifying the starting and end frame to save, using CAMAPI <tt>selective_save()</tt> method.<br />
* Saved, first changing some of the encoding parameters using the CAMAPI <tt>configure_save()</tt> method.<br />
* Saved multiple times, changing encoding parameters and/or starting and ending frames to save.<br />
* Deleted, using the new CAMAPI <tt>delete_captured_videos()</tt> method.<br />
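The queue operations above can be sketched in python. The CAMAPI method names come from this page; FakeCam is a made-up in-memory stand-in so the flow can run without a camera (a real client would invoke the same methods through a CAMAPI binding):<br />

```python
# FakeCam is a hypothetical stand-in mimicking the CAMAPI methods named on
# this page; it is NOT the real camera API, just enough to run the flow.
class FakeCam:
    def __init__(self):
        # buffer_number -> captured video info, as get_captured_video_info() reports
        self.queue = {1: {'target_filename': 'a.mov'}, 2: {'target_filename': 'b.mov'}}
        self.saved = []

    def get_captured_video_info(self):
        info = {'unsaved_count': len(self.queue)}
        info.update(self.queue)
        return info

    def selective_save(self, buffer_number):
        self.saved.append(self.queue[buffer_number]['target_filename'])

    def delete_captured_videos(self, delete_list):
        for n in delete_list:
            self.queue.pop(n)

def drain_queue(cam):
    """Save every video in the captured video queue, then free its buffer."""
    info = cam.get_captured_video_info()
    # Buffer numbers are the keys whose value is a dictionary.
    buffers = [k for k, v in info.items() if isinstance(v, dict)]
    for n in buffers:
        cam.selective_save(buffer_number=n)              # save this capture
    cam.delete_captured_videos(delete_list=buffers)      # free DDR3 for new triggers

cam = FakeCam()
drain_queue(cam)
print(cam.saved, cam.get_captured_video_info()['unsaved_count'])  # ['a.mov', 'b.mov'] 0
```

Note that on a real camera the client must wait for each save to complete (see the camera monitoring section below) before deleting the buffer; the stand-in saves synchronously.<br />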
<br />
== Unfortunate terminology ==<br />
<br />
To maintain compatibility with existing software that controls an edgertronic camera, some dictionary key names are used that don't reflect their actual meaning.<br />
<br />
* <tt>get_captured_video_info()</tt> key: '''unsaved_count''': the actual meaning is the number of videos in the captured video queue.<br />
* <tt>get_captured_video_info()</tt> key: '''first_buffer''': has no meaning when save mode is ''manual''. Instead you have to walk the keys in the <tt>get_captured_video_info()</tt> returned dictionary and check for a value whose data type is a dictionary. Sorry about that. Each such key is logically an integer (but the [https://stackoverflow.com/questions/1450957/pythons-json-module-converts-int-dictionary-keys-to-strings python JSON library forces it to be a string]).<br />
* In [[User Manual - Filenaming#File naming parameters|filename parameter expansion]], '''&b''' is referred to as the '''multishot buffer''': the actual meaning is the capture number.<br />
* <tt></tt>, <tt></tt>, <tt></tt> key: '''buffer_number''':<br />
<br />
== Identifying queued videos ==<br />
<br />
The CAMAPI <tt>get_captured_video_info()</tt> method provides information about all the captured videos in the queue. The <tt>get_captured_video_info()</tt> response has been extended to include two new dictionary keys:<br />
<br />
* user_parm - a string that can be provided with the CAMAPI <tt>trigger()</tt> method.<br />
* target_filename - the expanded base filename, without any directory paths. The actual filename could be different if specified via the CAMAPI <tt>selective_save()</tt> method.<br />
<br />
The ''buffer_number'' is used with the CAMAPI <tt>selective_save()</tt> and <tt>delete_captured_videos()</tt> methods. Note that ''buffer_number'' is used for historic reasons with capture ID number being a more descriptive name. When CAMAPI <tt>run()</tt> is called the ''buffer_number'' is set to 0 and incremented by one each time the camera is triggered.<br />
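Because the response travels as JSON, the integer buffer numbers arrive as string keys mixed in with scalar entries such as unsaved_count. A short sketch of extracting them; the response literal below is made up for illustration:<br />

```python
import json

# Made-up get_captured_video_info() response; real responses come from the camera.
response = json.loads(
    '{"unsaved_count": 2,'
    ' "1": {"target_filename": "pitch1.mov", "user_parm": "92mph"},'
    ' "2": {"target_filename": "pitch2.mov", "user_parm": "88mph"}}')

# Keep only entries whose value is a dictionary; those keys are buffer numbers.
videos = {int(k): v for k, v in response.items() if isinstance(v, dict)}

print(sorted(videos))            # buffer numbers back as integers: [1, 2]
print(videos[1]['user_parm'])    # 92mph
```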
<br />
= Manual save interaction with other CAMAPI methods =<br />
<br />
When the camera is configured for manual save mode, there are interactions with various CAMAPI methods that the developer should keep in mind.<br />
<br />
{| border=1<br />
! CAMAPI Method !! Parameter !! Interaction<br />
|-<br />
| run() || ''all'' || All videos in the video capture queue are deleted. ''buffer_number'' is reset to 1 for the next capture. Must be called when the camera is not currently saving a video.<br />
|-<br />
| run() || key: ''filename_pattern'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| run() || key: ''save_mode'' || Must be set to <tt>SAVE_MODE_MANUAL</tt>.<br />
|-<br />
| trigger() || key: ''base_filename'' || May affect <tt>get_captured_video_info()</tt> key ''target_filename''. See filenaming section for details.<br />
|-<br />
| trigger() || key: ''user_parm'' || Returned by get_captured_video_info() as part of captured video information.<br />
|-<br />
| selective_save() || key: ''buffer_number'' || Used to identify the video in the captured video queue to save. The ''buffer_number'' is the same as returned by get_captured_video_info().<br />
|-<br />
| selective_save() || keys: ''start_frame''<br>''end_frame''|| Allows a subset of the captured frames for a specific video in the captured video queue to be saved.<br />
|-<br />
| selective_save() || key: ''filename'' || Highest priority filename pattern. Overrides default filename and any filename set via the CAMAPI methods run() reconfigure_run(), or trigger().<br />
|-<br />
| save_stop() || ''all'' || Not supported in manual save mode.<br />
|-<br />
| get_captured_video_info() || ''all'' || The buffer number is used as a dictionary key to index the captured video of interest. The keys ''user_parm'' and ''target_filename'' enable associating a specific trigger() invocation with a resulting captured video in the queue.<br />
|-<br />
| delete_captured_videos() || key: ''delete_list'' || Deletes videos from the captured video queue using the buffer index. The buffer indices can be extracted from the dictionary returned by get_captured_video_info().<br />
|-<br />
| delete_captured_videos() || key: ''delete_all'' || Deletes all the videos from the captured video queue. <br />
|-<br />
| get_camstatus() || key: ''unsaved_frame_count'' || Only applies to the current video being saved, not to the other unsaved videos in the queue.<br />
|-<br />
| get_camstatus() || ||<br />
|-<br />
| get_camstatus() || ||<br />
|}<br />
<br />
= Live view =<br />
<br />
By using manual save instead of background save, you are able to see live view in between video saves. This can be useful in cases like capturing baseball pitches, when there always seem to be captured videos in the queue, with the camera catching up at the half inning.<br />
<br />
= Example manual save mode usage =<br />
<br />
For this example, assume a radar is triggering the edgertronic camera. The delay from the event until the trigger occurs is 700 ms. Further assume you want 100ms capture before the event and 100 ms capture after the event. Since the 700ms latency is larger than the 100ms post-event you want to capture, there will be 600ms of video after the duration of interest.<br />
<br />
== Camera settings ==<br />
<br />
Key camera settings:<br />
* Frame rate: 1000fps<br />
* Pre-trigger buffer: 800ms (100ms of pre-trigger plus the 700ms latency)<br />
* Post-trigger buffer: 0ms<br />
<br />
The first frame captured is frame -800 (800ms × 1000 fps = 800 pre-trigger frames). Remember all pre-trigger frames have a negative frame number. The last frame captured is 0, the trigger frame.<br />
<br />
From the above, we can calculate the frames of interest (those from 100 ms before the event to 100 ms after the event): <br />
* First frame: -800, which is 100ms before the event and 800ms before the trigger. <br />
* Last frame: -600, which is 100ms after the event; 200ms of video captured at 1000fps is a total of 200 frames.<br />
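The arithmetic above generalizes to any trigger latency and capture window. A small helper, using the page's convention that frame 0 is the trigger frame and pre-trigger frames are negative:<br />

```python
def frames_of_interest(fps, trigger_latency_ms, pre_event_ms, post_event_ms):
    """Return (first_frame, last_frame) relative to the trigger (frame 0)."""
    event_frame = -trigger_latency_ms * fps // 1000   # the event precedes the trigger
    first = event_frame - pre_event_ms * fps // 1000
    last = event_frame + post_event_ms * fps // 1000
    return first, last

# 1000 fps, 700 ms radar latency, 100 ms before and after the event:
print(frames_of_interest(1000, 700, 100, 100))   # (-800, -600)
```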
<br />
Configure the camera by invoking the CAMAPI run() method passing in your requested settings.<br />
<br />
== Triggering ==<br />
<br />
Triggering can be done either using a radar detector contact closure connected to the camera's remote trigger input or using CAMAPI trigger() method. If you use the trigger() method, consider passing in a user parameter (perhaps the detected speed of the ball) and/or filename.<br />
<br />
== Camera monitoring ==<br />
<br />
At this point, the only way to monitor changes in camera state is by polling the camera using the CAMAPI get_camstatus() method. The controlling external device should be monitoring the camera for changes in any of the following:<br />
<br />
* Camera capture state<br />
* Camera save state<br />
* Camera status<br />
<br />
In python, it could be something like:<br />
<pre><br />
import hcamapi, time<br />
cam = hcamapi.HCamapi("10.11.12.13")<br />
<br />
status = cam.get_camstatus()<br />
<br />
while True:<br />
    time.sleep(1)<br />
    new_status = cam.get_camstatus()<br />
    if new_status.get('active_buffer') > status.get('active_buffer'):<br />
        process_capture_complete(cam, new_status)<br />
    if new_status.get('save_state') != status.get('save_state'):  # key name assumed<br />
        process_save_complete(cam, new_status)<br />
    if new_status.get('camera_status') != status.get('camera_status'):  # key name assumed<br />
        process_camera_status_change(cam, new_status)<br />
    status = new_status<br />
</pre><br />
<br />
=== Processing capture state changes ===<br />
<br />
Once the CAMAPI run() method is invoked, the camera increments the active_buffer (really the capture count), the state switches to filling-pre-trigger-buffer, and the camera starts filling the pre-trigger buffer with video frames. If no trigger occurs by the time the pre-trigger buffer is full, the camera switches to the pre-trigger-buffer-full state and continues capturing video frames, each time overwriting the oldest frame in the pre-trigger buffer.<br />
<br />
Once a trigger occurs, the camera switches to the filling-post-trigger-buffer state and writes frames to the post-trigger buffer until the buffer is full. After the buffer is full, several things happen:<br />
* All information about the just captured video is recorded and associated with the new entry in the captured video queue. <br />
* The camera locates an empty buffer in DDR3. If one is not available, the camera stops the capture and switches to the buffers-full-trigger-disabled state.<br />
* The camera state switches to filling-pre-trigger-buffer and the camera starts storing video frames in the pre-trigger buffer of the DDR3 buffer that was previously empty. <br />
* The unsaved_count entry in the dictionary returned by the CAMAPI get_camstatus() method is incremented.<br />
* A new entry is added to the dictionary returned by the CAMAPI get_captured_video_info() method (and the unsaved_count entry in that dictionary is incremented as well).<br />
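The capture state transitions described above can be summarized as a small transition table. This is a sketch: the state and event names below are informal labels taken from this page, not CAMAPI identifiers.<br />
<br />
```python
# Informal summary of the capture state machine described above.
# State and event names are labels from this page, not CAMAPI constants.
CAPTURE_TRANSITIONS = {
    ('filling-pre-trigger-buffer', 'pre-trigger buffer full'): 'pre-trigger-buffer-full',
    ('filling-pre-trigger-buffer', 'trigger'): 'filling-post-trigger-buffer',
    ('pre-trigger-buffer-full', 'trigger'): 'filling-post-trigger-buffer',
    # Post-trigger buffer full: reuse an empty DDR3 buffer if one exists...
    ('filling-post-trigger-buffer', 'buffer full, empty DDR3 buffer'): 'filling-pre-trigger-buffer',
    # ...otherwise capturing stops until a buffer is saved and freed.
    ('filling-post-trigger-buffer', 'buffer full, no empty buffer'): 'buffers-full-trigger-disabled',
}

def next_capture_state(state, event):
    """Return the new state, or the current state if the event is ignored."""
    return CAPTURE_TRANSITIONS.get((state, event), state)
```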
<br />
Processing a new capture consists of monitoring for a change in the get_camstatus() active_buffer entry (really a capture count); once a change is detected, the CAMAPI get_captured_video_info() method should be invoked to keep the external controlling device's cached list of available captured videos up to date.<br />
<br />
In Python, it could be something like:<br />
<pre><br />
def process_capture_complete(cam, status):<br />
    global vidque  # dictionary keyed by trigger time as an integer<br />
    vids = cam.get_captured_video_info()<br />
    vidque_modified = False<br />
    for key in vids.keys():<br />
        vid = vids.get(key)<br />
        if type(vid) is dict:<br />
            tt = int(vid.get('trigger_time'))<br />
            if vidque.get(tt) is None:<br />
                vidque_modified = True<br />
                vidque[tt] = vid<br />
    if vidque_modified:<br />
        controller_handle_new_videos()<br />
</pre><br />
<br />
=== Processing save state changes ===<br />
<br />
In manual save mode, saving a video file is initiated by invoking the CAMAPI selective_save() method. Any segment of any buffer in the captured video queue can be saved to a video file. Once a save is in progress, the camera will indicate the save has completed through a change in the status returned by the CAMAPI get_camstatus() method, which the polling loop can detect.<br />
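As an illustration of initiating the next save from the client's cached queue, here is a sketch. The save_oldest_unsaved() helper is hypothetical, and the selective_save() argument names (buffer_number, first_frame, last_frame) are assumptions for illustration; consult the CAMAPI documentation for the actual parameters.<br />
<br />
```python
# Sketch: save the oldest captured video in the client's cached queue.
# The selective_save() argument names used here are assumed, not taken
# from the CAMAPI documentation.
def save_oldest_unsaved(cam, vidque):
    tt = min(vidque.keys())   # oldest trigger time
    vid = vidque.pop(tt)      # remove it from the cached queue
    cam.selective_save(vid.get('buffer_number'),
                       vid.get('first_frame'),
                       vid.get('last_frame'))
    return tt
```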
<br />
It is also possible for a save to be interrupted by a storage full condition; in this case, the camera reports the condition through its status information.<br />
<br />
In Python, it could be something like (controller_handle_save_complete() is a placeholder for application-specific logic):<br />
<pre><br />
def process_save_complete(cam, status):<br />
    # Refresh the cached list of captured videos, then let the<br />
    # application decide what happens next, e.g. invoke<br />
    # selective_save() again for the next video to be saved.<br />
    vids = cam.get_captured_video_info()<br />
    controller_handle_save_complete(cam, status, vids)<br />
</pre><br />
<br />
=== Processing status changes ===<br />
<br />
In Python, it could be something like (controller_handle_state_change() is a placeholder for application-specific logic):<br />
<pre><br />
def process_camera_status_change(cam, status):<br />
    # React to the new camera state, e.g. alert the operator when the<br />
    # camera enters the buffers-full-trigger-disabled state because no<br />
    # empty DDR3 buffer is available.<br />
    controller_handle_state_change(cam, status.get('state'))<br />
</pre><br />
<br />
= Random implementation notes =<br />
<br />
# At some point the camera may support a WebSocket that notifies clients when the camera's state changes by sending CAMAPI <tt>get_camstatus()</tt> responses. The name of the file just saved could also be included in the camera status information. This would save the controlling external device from having to poll CAMAPI get_camstatus() to detect whether a capture or a save just finished. I mention this now because the camera control software is implemented to support asynchronous notifications, but the version of the lighttpd web server in use doesn't support WebSockets.<br />
# The idea of implementing a queue of requested saves was considered and discarded. The client controlling the camera needs to monitor when a save completes and control what occurs next.</div>Tfischer