fastface.utils

fastface.utils.box

fastface.utils.box.jaccard_vectorized(box_a: Tensor, box_b: Tensor) Tensor

Calculates the Jaccard index (IoU) in a vectorized fashion

Parameters:
  • box_a (torch.Tensor) – torch.Tensor(A,4) as xmin,ymin,xmax,ymax

  • box_b (torch.Tensor) – torch.Tensor(B,4) as xmin,ymin,xmax,ymax

Returns:

IoUs as torch.Tensor(A,B)

Return type:

torch.Tensor
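
A minimal usage sketch based on the documented signature; the box values below are made up for illustration:

   import torch
   from fastface.utils.box import jaccard_vectorized

   box_a = torch.tensor([
       [0.0, 0.0, 10.0, 10.0],
       [5.0, 5.0, 15.0, 15.0],
   ])  # (A=2, 4) as xmin, ymin, xmax, ymax
   box_b = torch.tensor([[0.0, 0.0, 10.0, 10.0]])  # (B=1, 4)

   ious = jaccard_vectorized(box_a, box_b)
   # ious has shape (2, 1); ious[i, j] is the IoU between box_a[i] and box_b[j]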

fastface.utils.box.intersect(box_a: Tensor, box_b: Tensor) Tensor

Calculates intersection area of given boxes

Parameters:
  • box_a (torch.Tensor) – torch.Tensor(A,4) as xmin,ymin,xmax,ymax

  • box_b (torch.Tensor) – torch.Tensor(B,4) as xmin,ymin,xmax,ymax

Returns:

Intersection areas as torch.Tensor(A,B)

Return type:

torch.Tensor
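
A small sketch, assuming the import path mirrors the module layout above; the expected values follow from simple box geometry:

   import torch
   from fastface.utils.box import intersect

   box_a = torch.tensor([[0.0, 0.0, 4.0, 4.0]])  # (A=1, 4)
   box_b = torch.tensor([
       [2.0, 2.0, 6.0, 6.0],
       [10.0, 10.0, 12.0, 12.0],
   ])  # (B=2, 4)

   areas = intersect(box_a, box_b)
   # areas has shape (1, 2); the first box overlaps box_a by 2 x 2 = 4.0,
   # the second box does not overlap, so its intersection area is 0.0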

fastface.utils.box.cxcywh2xyxy(boxes: Tensor) Tensor

Convert box coordinates from centerx centery width height to xmin ymin xmax ymax

Parameters:

boxes (torch.Tensor) – torch.Tensor(N,4) as centerx centery width height

Returns:

torch.Tensor(N,4) as xmin ymin xmax ymax

Return type:

torch.Tensor
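
A one-line conversion example based on the documented layout (values chosen by hand):

   import torch
   from fastface.utils.box import cxcywh2xyxy

   boxes = torch.tensor([[5.0, 5.0, 4.0, 6.0]])  # (N=1, 4) as centerx, centery, width, height
   xyxy = cxcywh2xyxy(boxes)                     # expected: [[3., 2., 7., 8.]]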

fastface.utils.box.xyxy2cxcywh(boxes: Tensor) Tensor

Convert box coordinates from xmin ymin xmax ymax to centerx centery width height

Parameters:

boxes (torch.Tensor) – torch.Tensor(N,4) as xmin ymin xmax ymax

Returns:

torch.Tensor(N,4) as centerx centery width height

Return type:

torch.Tensor
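
A round-trip sketch showing that xyxy2cxcywh inverts cxcywh2xyxy (values chosen by hand):

   import torch
   from fastface.utils.box import cxcywh2xyxy, xyxy2cxcywh

   boxes = torch.tensor([[3.0, 2.0, 7.0, 8.0]])  # (N=1, 4) as xmin, ymin, xmax, ymax
   cxcywh = xyxy2cxcywh(boxes)                   # expected: [[5., 5., 4., 6.]]
   assert torch.allclose(cxcywh2xyxy(cxcywh), boxes)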

fastface.utils.box.batched_nms(boxes: Tensor, scores: Tensor, batch_ids: Tensor, iou_threshold: float = 0.4) Tensor

Applies batched non-maximum suppression to given boxes

Parameters:
  • boxes (torch.Tensor) – torch.Tensor(N,4) as xmin ymin xmax ymax

  • scores (torch.Tensor) – torch.Tensor(N,) as score

  • batch_ids (torch.Tensor) – torch.LongTensor(N,) as batch idx

  • iou_threshold (float, optional) – nms threshold. Defaults to 0.4.

Returns:

keep mask

Return type:

torch.Tensor
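
A hedged usage sketch built from the documented parameters; the boxes, scores and batch ids below are placeholders:

   import torch
   from fastface.utils.box import batched_nms

   boxes = torch.tensor([
       [0.0, 0.0, 10.0, 10.0],
       [1.0, 1.0, 11.0, 11.0],
       [0.0, 0.0, 10.0, 10.0],
   ])                                      # (N=3, 4) as xmin, ymin, xmax, ymax
   scores = torch.tensor([0.9, 0.8, 0.7])  # (N,)
   batch_ids = torch.tensor([0, 0, 1])     # first two boxes belong to image 0, last to image 1

   keep = batched_nms(boxes, scores, batch_ids, iou_threshold=0.4)
   # `keep` marks which boxes survive suppression; boxes with different batch ids
   # never suppress each other, so indexing with it yields the retained boxes
   selected = boxes[keep]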

fastface.utils.preprocess

fastface.utils.preprocess.prepare_batch(batch: List[Tensor], target_size: int, adaptive_batch: bool = False) Tuple[Tensor, Tensor, Tensor]

Converts a list of image tensors into a single batched tensor, along with the applied scales and paddings

Parameters:
  • batch (List[torch.Tensor]) – list of tensors(float) as (C x H x W)

  • target_size (int) – maximum dimension size to fit

  • adaptive_batch (bool, optional) – if True, batching will be adaptive, using the maximum dimension of the batch; otherwise target_size is used. Defaults to False.

Returns:

  1. : batched inputs as B x C x target_size x target_size

  2. : applied scale factors for each image as torch.FloatTensor(B,)

  3. : applied padding for each image as torch.LongTensor(B,4) pad left, top, right, bottom

Return type:

Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
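
A minimal sketch, assuming prepare_batch behaves as documented; the images are random placeholders:

   import torch
   from fastface.utils.preprocess import prepare_batch

   imgs = [torch.rand(3, 480, 640), torch.rand(3, 300, 300)]  # list of C x H x W float tensors

   batch, scales, paddings = prepare_batch(imgs, target_size=640, adaptive_batch=False)
   # batch:    (2, 3, 640, 640) batched inputs
   # scales:   (2,)  scale factor applied to each image
   # paddings: (2, 4) pad left, top, right, bottom applied to each image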

fastface.utils.preprocess.adjust_results(preds: List[Tensor], scales: Tensor, paddings: Tensor) Tensor

Re-adjust predictions using scales and paddings

Parameters:
  • preds (List[torch.Tensor]) – list of torch.Tensor(N, 5) as xmin, ymin, xmax, ymax, score

  • scales (torch.Tensor) – torch.Tensor(B,)

  • paddings (torch.Tensor) – torch.Tensor(B,4) as pad_left, pad_top, pad_right, pad_bottom

Returns:

torch.Tensor(B, N, 5) as xmin, ymin, xmax, ymax, score

Return type:

torch.Tensor
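
A hedged end-to-end sketch: detections produced on the padded, rescaled batch are mapped back to the original image coordinates. model_preds is a placeholder for real per-image (N, 5) predictions:

   import torch
   from fastface.utils.preprocess import adjust_results, prepare_batch

   imgs = [torch.rand(3, 480, 640), torch.rand(3, 300, 300)]
   batch, scales, paddings = prepare_batch(imgs, target_size=640)

   # pretend each image yielded a single xmin, ymin, xmax, ymax, score row
   model_preds = [torch.tensor([[10.0, 10.0, 50.0, 50.0, 0.9]]) for _ in imgs]

   adjusted = adjust_results(model_preds, scales, paddings)
   # adjusted: (B, N, 5) with boxes expressed in original image coordinates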

fastface.utils.vis

fastface.utils.vis.render_predictions(img: ndarray, preds: Dict, color: Tuple[int, int, int] | None = None) Image

Returns rendered PIL Image using given predictions

Parameters:
  • img (np.ndarray) – 3 channeled image

  • preds (Dict) – predictions as {'boxes': [[x1, y1, x2, y2], ...], 'scores': [<float>, ...]}

  • color (Tuple[int, int, int], optional) – color of the boundaries. If None, a random color will be used.

Returns:

Rendered 3 channeled PIL Image

Return type:

Image
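
A minimal sketch, assuming render_predictions accepts a 3 channeled numpy image and the documented prediction dict; the boxes and scores are made up:

   import numpy as np
   from fastface.utils.vis import render_predictions

   img = np.zeros((480, 640, 3), dtype=np.uint8)  # blank 3 channeled image
   preds = {
       "boxes": [[100, 100, 200, 200]],
       "scores": [0.97],
   }

   pil_img = render_predictions(img, preds, color=(0, 255, 0))
   pil_img.save("rendered.png")  # the return value is a PIL Image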