Examples


Example – Comparing Enums

Enum members can be compared, for example with is, ==, and so on.

When comparing enum members, prefer comparing them by identity (using is) rather than by value (using ==). This is because different enum members may share the same underlying value.

PYTHON
from enum import Enum

class Size(Enum):
    S = 1
    M = 2
    L = 3
    XL = 4

print(Size.S.value < Size.M.value)   # Output: True
print(Size.L.value > Size.XL.value)  # Output: False
print(Size.L is Size.XL)             # Output: False
print(Size.L == Size.XL)             # Output: False
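
The pitfall shows up when two unrelated enums happen to reuse the same integer values. A minimal sketch (the Color enum below is hypothetical, added only to illustrate the collision):

PYTHON
from enum import Enum

class Size(Enum):
    S = 1
    M = 2

class Color(Enum):
    RED = 1    # hypothetical enum that reuses the value 1
    GREEN = 2

print(Size.S.value == Color.RED.value)  # Output: True  - the raw values collide
print(Size.S == Color.RED)              # Output: False - members of different enums are never equal
print(Size.S is Color.RED)              # Output: False - identity comparison is unambiguous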

Example – Image Processing

Using an Enum to select which OpenCV operation to apply:
PYTHON
import cv2
from enum import Enum

class Operation(Enum):
    GRAYSCALE = 1
    BLUR = 2
    EDGE_DETECTION = 3

def process_image(img_path, operation):
    img = cv2.imread(img_path)

    if operation == Operation.GRAYSCALE:
        processed_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    elif operation == Operation.BLUR:
        processed_img = cv2.GaussianBlur(img, (15, 15), 0)
    elif operation == Operation.EDGE_DETECTION:
        processed_img = cv2.Canny(img, 100, 200)

    cv2.imshow('Processed Image', processed_img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Usage:
process_image('path_to_image.jpg', Operation.GRAYSCALE)

Example – Deep Learning

Building different types of deep learning models:

PYTHON
import torch
from torchvision import models
from enum import Enum

class ModelType(Enum):
    RESNET = 1
    ALEXNET = 2
    VGG = 3

def get_model(model_type):
    if model_type == ModelType.RESNET:
        model = models.resnet50(weights='DEFAULT')
    elif model_type == ModelType.ALEXNET:
        model = models.alexnet(weights='DEFAULT')
    elif model_type == ModelType.VGG:
        model = models.vgg16(weights='DEFAULT')

    return model

# Usage:
model = get_model(ModelType.RESNET)

Building different types of loss functions:

PYTHON
import torch.nn as nn
from enum import Enum

class LossType(Enum):
    CROSS_ENTROPY = 1
    MSE = 2
    NLL = 3

def get_loss(loss_type):
    if loss_type == LossType.CROSS_ENTROPY:
        loss_function = nn.CrossEntropyLoss()
    elif loss_type == LossType.MSE:
        loss_function = nn.MSELoss()
    elif loss_type == LossType.NLL:
        loss_function = nn.NLLLoss()

    return loss_function

# Usage:
loss_function = get_loss(LossType.CROSS_ENTROPY)

Example – Application

In this example the benefit of using enums becomes more obvious. Instead of multiple methods or string parameters to specify the operation and the output format, enums give you a single, uniform interface. This makes the code easier to maintain and reduces the risk of errors.

Moreover, by grouping these related constants into Enum classes, you can pass them around your program and use them in various contexts, knowing that they can only take one of a defined set of values.

The main advantage of using @property with a setter is the ability to add validation when the enum is assigned, as demonstrated after the example below. Here we make sure the supplied value is an instance of the appropriate Enum; if it is not, a ValueError is raised. This makes the code more robust and prevents bugs that are hard to track down.

PYTHON
import cv2
import torch
import numpy as np
from enum import Enum
from torchvision import models
from torchvision import transforms

class ImageOperation(Enum):
    GRAYSCALE = 1
    BLUR = 2
    EDGE_DETECTION = 3
    DEEP_LEARNING = 4

class OutputFormat(Enum):
    JPG = 1
    PNG = 2
    TIFF = 3

class ImageProcessor:
    def __init__(self, operation, output_format):
        self._operation = ImageOperation(operation)
        self._output_format = output_format if output_format is None else OutputFormat(output_format)
        self.model = models.resnet50(weights='DEFAULT')

    def process_image(self, img_path):
        img = cv2.imread(img_path)

        if self.operation == ImageOperation.GRAYSCALE:
            processed_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        elif self.operation == ImageOperation.BLUR:
            processed_img = cv2.GaussianBlur(img, (15, 15), 0)
        elif self.operation == ImageOperation.EDGE_DETECTION:
            processed_img = cv2.Canny(img, 100, 200)
        elif self.operation == ImageOperation.DEEP_LEARNING:
            # Preprocess and run through model (assumes model is some kind of image classifier)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Convert from BGR to RGB
            img_tensor = self.preprocess(img)
            output = self.model(img_tensor.unsqueeze(0))
            confidence_score, predicted = torch.max(output, 1)
            processed_img = self.visualize(img, confidence_score, predicted)

        # Save image in desired format
        if self.output_format is None:
            return processed_img
        elif self.output_format == OutputFormat.JPG:
            cv2.imwrite('output.jpg', processed_img, [cv2.IMWRITE_JPEG_QUALITY, 100])
        elif self.output_format == OutputFormat.PNG:
            cv2.imwrite('output.png', processed_img)
        elif self.output_format == OutputFormat.TIFF:
            cv2.imwrite('output.tiff', processed_img)

    def preprocess(self, img):
        # Define the transformations: resize -> to tensor -> normalize
        transform = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((224, 224)),  # Most pretrained models expect 224x224 images
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # Values from ImageNet
        ])
    
        # Apply the transformations
        img_tensor = transform(img)
        return img_tensor


    def visualize(self, img, confidence_score, predicted):
        # Placeholder for your visualization method based on the model's output
        print(f"imagenet labels id:{predicted.item()}, confidence_score:{confidence_score.item():.2f}")
        return img

    @property
    def operation(self):
        return self._operation

    @operation.setter
    def operation(self, operation):
        if not isinstance(operation, ImageOperation):
            raise ValueError("operation must be an instance of ImageOperation Enum.")
        self._operation = operation

    @property
    def output_format(self):
        return self._output_format

    @output_format.setter
    def output_format(self, output_format):
        if output_format is not None and not isinstance(output_format, OutputFormat):
            raise ValueError("output_format must be an instance of OutputFormat Enum or None.")
        self._output_format = output_format

# Usage:
processor = ImageProcessor(ImageOperation.DEEP_LEARNING, None)
processor.process_image('n02085782_2.jpg')

# Change operation and output format
processor.operation = ImageOperation.GRAYSCALE
processor.output_format = OutputFormat.PNG
processor.process_image('n02085782_2.jpg')

Execution result:

imagenet labels id:152, confidence_score:0.89
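
To see the setter validation from the discussion above in action, assigning a value that is not an OutputFormat member raises a ValueError. A minimal sketch, reusing the ImageProcessor class defined above:

PYTHON
# Reuses ImageProcessor, ImageOperation and OutputFormat from the example above.
processor = ImageProcessor(ImageOperation.GRAYSCALE, OutputFormat.PNG)

try:
    processor.output_format = "png"  # a plain string, not an OutputFormat member
except ValueError as e:
    print(e)  # Output: output_format must be an instance of OutputFormat Enum or None.

# The Enum constructor itself also rejects values outside the defined set:
try:
    OutputFormat(5)
except ValueError as e:
    print(e)  # Output: 5 is not a valid OutputFormat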

References

For decoding the ImageNet label id, see:

imagenet 1000 class idx to human readable labels
enum — Support for enumerations — Python 3.11.4 documentation
Enum HOWTO — Python 3.11.4 documentation