I built a screen with SwiftUI + TCA inside an existing UIKit project.
The biggest takeaway from SwiftUI is how much convenience it offers: UI components separate cleanly, and the code is far shorter than the UIKit equivalent, which improves readability. Leaning on ChatGPT and the fast feedback loop of Previews also shortened UI development time significantly.
When handling API requests with TCA, I initially thought that needing multiple Actions (e.g., request, response) would complicate things, but I now see it as the price of keeping data flow unidirectional within the architecture.
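For illustration, here is a minimal sketch of that request/response pair in a reducer. It assumes TCA 1.x with the @Reducer and @ObservableState macros; SearchFeature, SearchClient, and the action names are hypothetical, not code from the actual screen.

```swift
import ComposableArchitecture

// Hypothetical API client; in a real feature this would be registered
// as a TCA dependency.
struct SearchClient {
    var search: (String) async throws -> [String]
}

@Reducer
struct SearchFeature {
    let searchClient: SearchClient

    @ObservableState
    struct State: Equatable {
        var results: [String] = []
        var isLoading = false
    }

    enum Action {
        case searchRequested(keyword: String)         // the "request" action
        case searchResponse(Result<[String], Error>)  // the "response" action
    }

    var body: some ReducerOf<SearchFeature> {
        Reduce { state, action in
            switch action {
            case let .searchRequested(keyword):
                state.isLoading = true
                // Side effect: run the request, then feed the result back
                // into the reducer as a second action.
                return .run { send in
                    do {
                        let results = try await searchClient.search(keyword)
                        await send(.searchResponse(.success(results)))
                    } catch {
                        await send(.searchResponse(.failure(error)))
                    }
                }

            case let .searchResponse(.success(results)):
                state.isLoading = false
                state.results = results
                return .none

            case .searchResponse(.failure):
                state.isLoading = false
                return .none
            }
        }
    }
}
```

The request action only kicks off an effect; the response action is the one place where state absorbs the result, so data keeps flowing in a single direction.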
There are various tools for extracting text from images (Google ML Kit, NHNCloud OCR, Vision). Among them, I decided to use Apple's Vision framework for text extraction.
There is a similarly named framework, VisionKit, and the two can be used together: VisionKit's VNDocumentCameraViewController detects and captures the document area, and the resulting image is handed to Vision for text extraction. However, there doesn't seem to be a way to customize the camera scan screen of VNDocumentCameraViewController directly.
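For context, the Vision side of this pipeline is small. Here is a minimal sketch of text extraction from an already-captured image; the function name recognizeText is mine, and the UIImage input is assumed to come from the document camera or AVFoundation.

```swift
import Vision
import UIKit

// Extracts recognized text lines from a captured image.
func recognizeText(in image: UIImage) throws -> [String] {
    guard let cgImage = image.cgImage else { return [] }

    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate  // trade speed for accuracy
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Each observation carries ranked candidates; keep the top one.
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}
```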
So I captured images with the AVFoundation framework instead and fed them to Vision, but the recognition rate was significantly lower. I then tested the 🔗 YesWeScan library, which I used often before VisionKit existed, and found its recognition rate comparable to VisionKit's. I plan to analyze the library to see how its AVFoundation configuration differs from my initial setup; the sketch below lists the settings I intend to compare first.
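As a starting point for that comparison, this is a hedged sketch of the capture settings that commonly affect OCR quality, not YesWeScan's actual setup: low-resolution session presets and a fixed focus mode are frequent causes of blurry frames.

```swift
import AVFoundation

// Candidate settings to diff against YesWeScan's capture pipeline.
func makeCaptureSession() -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .photo  // full-resolution stills instead of the default

    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return session }
    session.addInput(input)

    // Keep document text sharp while the user frames the shot.
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        device.unlockForConfiguration()
    } catch {
        // Focus configuration is best effort; capture still works without it.
    }

    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }
    return session
}
```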
To keep track of which videos I've watched from useful development-related YouTube channels, I wrote a Python script that uses the YouTube Data API to collect a channel's video list into a text file.
```python
import requests
import time

# Step 1: API settings
API_KEY = ''      # Enter your actual API key here
CHANNEL_ID = ''   # Enter the channel ID from which to fetch titles
MAX_RESULTS = 50  # Maximum number of results per request (the API allows up to 50)

# YouTube Data API v3 search endpoint base URL
youtube_url = 'https://www.googleapis.com/youtube/v3/search'

# Parameters needed for the API request
params = {
    'part': 'snippet',
    'maxResults': MAX_RESULTS,
    'channelId': CHANNEL_ID,
    'type': 'video',
    'order': 'date',  # Sort results by date
    'key': API_KEY
}

# Function to fetch video titles and URLs
def fetch_video_titles_and_urls():
    videos = []
    next_page_token = None
    while True:
        if next_page_token:
            params['pageToken'] = next_page_token

        # Step 2: Send GET request to the YouTube Data API
        response = requests.get(youtube_url, params=params)
        response.raise_for_status()  # Fail fast on quota/auth errors
        data = response.json()

        # Step 3: Extract video titles and URLs
        for item in data['items']:
            title = item['snippet']['title']
            video_id = item['id']['videoId']
            video_url = f'https://www.youtube.com/watch?v={video_id}'
            videos.append((title, video_url))

        # Stop when there is no next page
        next_page_token = data.get('nextPageToken')
        if not next_page_token:
            break

        # Wait 61 seconds between pages to stay under the API rate limit
        time.sleep(61)

    return videos

# Function to save titles and URLs to a text file
def save_titles_and_urls_to_file(videos, filename='video_titles_and_urls.txt'):
    with open(filename, 'w', encoding='utf-8') as file:
        for index, (title, url) in enumerate(videos, start=1):
            file.write(f"{index}. {title}\n- {url}\n\n")

# Execute the script
videos = fetch_video_titles_and_urls()
save_titles_and_urls_to_file(videos)
```
The script pauses 61 seconds whenever there is a next page and writes the collected titles and URLs to a video_titles_and_urls.txt file.

When you add a queryItem to URLComponents, it automatically performs UTF-8 URL encoding, so encoding the value yourself beforehand can result in double encoding.

```swift
func requestSearch(keyword: String) async throws -> Result {
    ...
    /* Duplicate encoding, no longer needed:
    guard let encodedKeyword = keyword.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) else {
        return .failure(.invalidUrl)
    }
    */

    // URL encoding is performed automatically when the keyword is added as a query item
    components?.queryItems = [
        URLQueryItem(name: "searchKeyword", value: keyword)
    ]
    ...
}
```
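A quick way to see the single pass of encoding (example.com is a placeholder host):

```swift
import Foundation

var components = URLComponents(string: "https://example.com/search")
components?.queryItems = [
    URLQueryItem(name: "searchKeyword", value: "한글 키워드")
]

// Prints the UTF-8 bytes percent-encoded exactly once:
// https://example.com/search?searchKeyword=%ED%95%9C%EA%B8%80%20%ED%82%A4%EC%9B%8C%EB%93%9C
print(components?.url?.absoluteString ?? "")
```

If the keyword had already been percent-encoded with addingPercentEncoding, the % characters themselves would be encoded again (%25...), which is the double encoding mentioned above.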