
ABI
@OpenGesture

The OpenGesture skill seeks to simplify the process of learning and understanding sign language.

OpenGesture's goal is to receive ZAR 150.00 in donations per week.
Donate   PayPal

Description

The OpenGesture skill uses a model built upon an Encoder-Decoder Convolutional Neural Network and a Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. With hand motion and content modelled independently, predicting the next frame reduces to transforming the extracted content features according to the identified hand motion features, which simplifies the prediction task.
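The motion/content decomposition described above can be sketched in a few lines. This is only an illustrative toy, not the project's actual model: random dense maps stand in for the convolutional encoders and the ConvLSTM, and all shapes and names (`W_content`, `W_motion`, `predict_next_frame`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8   # tiny frames for illustration
D = 16      # feature dimension

# Stand-in "encoders": random linear maps (the real model uses a CNN encoder
# for content and a ConvLSTM for motion).
W_content = rng.standard_normal((H * W, D)) * 0.1   # spatial layout of the frame
W_motion  = rng.standard_normal((H * W, D)) * 0.1   # temporal dynamics (frame difference)
W_decode  = rng.standard_normal((2 * D, H * W)) * 0.1  # stand-in decoder

def predict_next_frame(prev_frame, curr_frame):
    """Predict the next frame by transforming content features with motion features."""
    content = curr_frame.reshape(-1) @ W_content                 # what the scene looks like
    motion  = (curr_frame - prev_frame).reshape(-1) @ W_motion   # how the hands are moving
    combined = np.concatenate([content, motion])                 # fuse the two streams
    return (combined @ W_decode).reshape(H, W)

prev = rng.standard_normal((H, W))
curr = rng.standard_normal((H, W))
nxt = predict_next_frame(prev, curr)
print(nxt.shape)  # (8, 8)
```

The key design point, as in the description, is that the content path sees only the current frame while the motion path sees only how the frame changed, so the decoder's job is reduced to applying the motion to the content.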

Alexa handles the speech recognition through a custom-built Speech-to-Sign-Language translation skill, which recognises the words being spoken regardless of who the speaker is. The OpenGesture skill for Alexa performs the recognition process by matching the parameter set of the input speech against stored templates, and finally displays the sign language in video format.
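The template-matching step above can be sketched as a nearest-neighbour lookup. Everything here is a hypothetical stand-in: the feature vectors, the word templates, and the `signs/*.mp4` file names are invented for illustration and are not part of the actual skill.

```python
import numpy as np

# Hypothetical stored templates: word -> parameter vector
# (in practice these would be acoustic/speech features).
templates = {
    "hello":     np.array([0.2, 0.8, 0.1]),
    "thank_you": np.array([0.9, 0.1, 0.4]),
}

# Hypothetical mapping from a recognised word to a sign-language video clip.
sign_videos = {
    "hello":     "signs/hello.mp4",
    "thank_you": "signs/thank_you.mp4",
}

def recognise(features):
    """Match the input parameter vector against the stored templates
    by Euclidean distance and return the closest word."""
    return min(templates, key=lambda w: np.linalg.norm(features - templates[w]))

word = recognise(np.array([0.25, 0.75, 0.15]))
print(word, "->", sign_videos[word])  # hello -> signs/hello.mp4
```

Matching against stored templates like this is the classic template-matching approach to recognition; the recognised word then simply indexes into the library of sign-language video clips to display.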

Linked accounts

OpenGesture owns the following accounts on other platforms:

History

OpenGesture joined 5 years ago.

Income per week (in South African Rand)

Number of patrons per week