Drawing with Your Hand Using MediaPipe (Deep Learning & Machine Learning / MediaPipe, 2023. 10. 18.)
I implemented drawing pictures with your hand using MediaPipe.

First written: 2023. 3. 2

YouTube video: https://youtu.be/S_Op3Qy4e8Q?feature=shared
# Original code: https://google.github.io/mediapipe/solutions/hands.html
# The 21 hand landmarks.
# Landmark index reference: https://google.github.io/mediapipe/images/mobile/hand_landmarks.png
#
# WRIST = 0
# THUMB_CMC = 1
# THUMB_MCP = 2
# THUMB_IP = 3
# THUMB_TIP = 4             thumb
# INDEX_FINGER_MCP = 5
# INDEX_FINGER_PIP = 6
# INDEX_FINGER_DIP = 7
# INDEX_FINGER_TIP = 8      index finger
# MIDDLE_FINGER_MCP = 9
# MIDDLE_FINGER_PIP = 10
# MIDDLE_FINGER_DIP = 11
# MIDDLE_FINGER_TIP = 12    middle finger
# RING_FINGER_MCP = 13
# RING_FINGER_PIP = 14
# RING_FINGER_DIP = 15
# RING_FINGER_TIP = 16      ring finger
# PINKY_MCP = 17
# PINKY_PIP = 18
# PINKY_DIP = 19
# PINKY_TIP = 20            pinky

# Required libraries
# pip install opencv-python mediapipe pillow numpy
import cv2
import mediapipe as mp
from PIL import ImageFont, ImageDraw, Image
import numpy as np


def draw_ball_location(image, locations):
    # Connect consecutive fingertip positions with yellow line segments.
    for i in range(len(locations) - 1):
        if locations[i] is None or locations[i + 1] is None:
            continue
        cv2.line(image, tuple(locations[i]), tuple(locations[i + 1]), (0, 255, 255), 3)

    return image


list_ball_location = []       # points of the stroke currently being drawn
history_ball_locations = []   # previously finished strokes

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
mp_drawing_styles = mp.solutions.drawing_styles

cap = cv2.VideoCapture(0)

text = ""

with mp_hands.Hands(
        min_detection_confidence=0.5,
        min_tracking_confidence=0.5) as hands:

    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            continue

        # Flip the image horizontally for a later selfie-view display, and convert
        # the BGR image to RGB.
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)

        # To improve performance, optionally mark the image as not writeable to
        # pass by reference.
        image.flags.writeable = False
        results = hands.process(image)

        # Draw the hand annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        image_height, image_width, _ = image.shape

        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:

                # Compare the joint positions of each finger to decide whether the
                # finger is stretched out straight, and set a flag per finger.
                thumb_finger_state = 0
                if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_CMC].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_MCP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_MCP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_IP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_IP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].y * image_height:
                            thumb_finger_state = 1

                index_finger_state = 0
                if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_MCP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_PIP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_PIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height:
                            index_finger_state = 1

                middle_finger_state = 0
                if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_MCP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_PIP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_PIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y * image_height:
                            middle_finger_state = 1

                ring_finger_state = 0
                if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_MCP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_PIP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_PIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].y * image_height:
                            ring_finger_state = 1

                pinky_finger_state = 0
                if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_MCP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_PIP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_PIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].y * image_height:
                            pinky_finger_state = 1

                # Pen gesture: only the index finger is extended and every other
                # fingertip is below the index finger's DIP joint.
                use_pen = False
                if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height:
                    if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height:
                        if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height:
                            if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].y * image_height > hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height:
                                if index_finger_state:
                                    use_pen = True

                # Eraser gesture: all five fingers are extended (open palm).
                use_eraser = False
                if thumb_finger_state == 1 and index_finger_state == 1 and middle_finger_state == 1 and ring_finger_state == 1 and pinky_finger_state == 1:
                    use_eraser = True

                # Move gesture: the four fingers other than the thumb are folded (fist).
                use_move = False
                if index_finger_state == 0 and middle_finger_state == 0 and ring_finger_state == 0 and pinky_finger_state == 0:
                    use_move = True

                # Switch to Pillow so the Korean label can be rendered on the frame.
                image = Image.fromarray(image)
                draw = ImageDraw.Draw(image)

                text = ""
                if use_pen:
                    text = "펜"       # pen
                elif use_eraser:
                    text = "지우개"   # eraser
                elif use_move:
                    text = "이동"     # move

                if text == '펜':
                    # Append the index fingertip position (in pixels) to the current stroke.
                    x = int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width)
                    y = int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height)
                    list_ball_location.append((x, y))
                elif text == '이동':
                    # Finish the current stroke and keep it in the history.
                    history_ball_locations.append(list_ball_location.copy())
                    list_ball_location.clear()
                elif text == '지우개':
                    # Erase everything drawn so far.
                    history_ball_locations.clear()
                    list_ball_location.clear()

                font = ImageFont.truetype("fonts/gulim.ttc", 80)
                # Pillow 10 removed font.getsize(); measure the label with getbbox() instead.
                left, top, right, bottom = font.getbbox(text)
                w, h = right - left, bottom - top

                x = 50
                y = 50

                draw.rectangle((x, y, x + w, y + h), fill='black')
                draw.text((x, y), text, font=font, fill=(255, 255, 255))

                image = np.array(image)

                # Draw the hand skeleton.
                mp_drawing.draw_landmarks(
                    image,
                    hand_landmarks,
                    mp_hands.HAND_CONNECTIONS,
                    mp_drawing_styles.get_default_hand_landmarks_style(),
                    mp_drawing_styles.get_default_hand_connections_style())

        # Draw the stroke in progress and all finished strokes on every frame.
        image = draw_ball_location(image, list_ball_location)

        for ball_locations in history_ball_locations:
            image = draw_ball_location(image, ball_locations)

        cv2.imshow('MediaPipe Hands', image)

        if cv2.waitKey(5) & 0xFF == 27:
            break

cap.release()
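The five finger-state checks above all apply the same rule: a finger counts as extended when the y coordinate of each joint decreases from the palm toward the tip (y grows downward in image coordinates), and multiplying every term by image_height does not change the comparisons because the landmark coordinates are already normalized. The following is a minimal sketch, not part of the original post, of how that repetition could be factored out; finger_extended, FINGER_JOINTS, and classify_gesture are hypothetical names introduced here for illustration.

    # Hypothetical refactoring sketch of the repeated finger-state checks above.
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    # Joint chain from the palm outward, one tuple per finger.
    FINGER_JOINTS = {
        "thumb": (mp_hands.HandLandmark.THUMB_CMC, mp_hands.HandLandmark.THUMB_MCP,
                  mp_hands.HandLandmark.THUMB_IP, mp_hands.HandLandmark.THUMB_TIP),
        "index": (mp_hands.HandLandmark.INDEX_FINGER_MCP, mp_hands.HandLandmark.INDEX_FINGER_PIP,
                  mp_hands.HandLandmark.INDEX_FINGER_DIP, mp_hands.HandLandmark.INDEX_FINGER_TIP),
        "middle": (mp_hands.HandLandmark.MIDDLE_FINGER_MCP, mp_hands.HandLandmark.MIDDLE_FINGER_PIP,
                   mp_hands.HandLandmark.MIDDLE_FINGER_DIP, mp_hands.HandLandmark.MIDDLE_FINGER_TIP),
        "ring": (mp_hands.HandLandmark.RING_FINGER_MCP, mp_hands.HandLandmark.RING_FINGER_PIP,
                 mp_hands.HandLandmark.RING_FINGER_DIP, mp_hands.HandLandmark.RING_FINGER_TIP),
        "pinky": (mp_hands.HandLandmark.PINKY_MCP, mp_hands.HandLandmark.PINKY_PIP,
                  mp_hands.HandLandmark.PINKY_DIP, mp_hands.HandLandmark.PINKY_TIP),
    }


    def finger_extended(hand_landmarks, joints):
        # A finger is extended when y strictly decreases from the palm joint to the tip.
        # The image_height factor from the original code is dropped: y is normalized,
        # so scaling by a positive constant does not affect the comparisons.
        ys = [hand_landmarks.landmark[j].y for j in joints]
        return all(ys[i] > ys[i + 1] for i in range(len(ys) - 1))


    def classify_gesture(hand_landmarks):
        # Same mapping as the post: index only -> pen, open palm -> eraser, fist -> move.
        states = {name: finger_extended(hand_landmarks, joints)
                  for name, joints in FINGER_JOINTS.items()}

        index_dip_y = hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y
        other_tips_below = all(
            hand_landmarks.landmark[tip].y > index_dip_y
            for tip in (mp_hands.HandLandmark.THUMB_TIP,
                        mp_hands.HandLandmark.MIDDLE_FINGER_TIP,
                        mp_hands.HandLandmark.RING_FINGER_TIP,
                        mp_hands.HandLandmark.PINKY_TIP))

        if states["index"] and other_tips_below:
            return "pen"
        if all(states.values()):
            return "eraser"
        if not any(states[f] for f in ("index", "middle", "ring", "pinky")):
            return "move"
        return None

With a helper like this, the main loop could replace the five nearly identical if-ladders with a single call such as gesture = classify_gesture(hand_landmarks) and map "pen", "eraser", and "move" to the Korean labels it draws on the frame.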