
Interview Question Handbook

How do you test React components with Jest? What are the common testing tools and query methods?

Testing React components in Jest combines a rendering library with Jest's assertion and mocking APIs.

**Common testing tools:**

- `@testing-library/react`: the officially recommended React testing library
- `react-test-renderer`: used for snapshot testing
- `enzyme`: a legacy React component testing tool (rarely used today)

**Basic test example:**

```jsx
import { render, screen, fireEvent } from '@testing-library/react';
import Button from './Button';

test('renders button with text', () => {
  render(<Button>Click me</Button>);
  expect(screen.getByText('Click me')).toBeInTheDocument();
});

test('calls onClick when clicked', () => {
  const handleClick = jest.fn();
  render(<Button onClick={handleClick}>Click me</Button>);
  fireEvent.click(screen.getByText('Click me'));
  expect(handleClick).toHaveBeenCalledTimes(1);
});
```

**Common query methods:**

- `getByText()`: find an element by its text content
- `getByRole()`: find an element by its ARIA role
- `getByTestId()`: find an element by its `data-testid` attribute
- `queryByText()`: like `getByText()`, but returns `null` instead of throwing when the element is absent
- `findByText()`: asynchronous lookup that returns a Promise

**Testing asynchronous components:**

```jsx
import { render, screen, waitFor } from '@testing-library/react';
import UserList from './UserList';

test('loads and displays data', async () => {
  render(<UserList />);
  expect(screen.getByText('Loading...')).toBeInTheDocument();
  await waitFor(() => {
    expect(screen.getByText('John')).toBeInTheDocument();
  });
});
```

**Best practices:**

- Test user behavior, not implementation details
- Prefer `@testing-library/react` over `enzyme`
- Use `data-testid` only as a last resort
- Avoid asserting on internal state; test visible output
- Keep tests simple and readable
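The distinction between the `getBy*` and `queryBy*` variants is the one interviewers probe most often. As a mental model only (this is a toy sketch, not Testing Library's actual implementation), `getBy` throws when nothing matches while `queryBy` returns `null`:

```javascript
// Toy model of Testing Library's lookup semantics (illustrative only).
// "Nodes" here are plain objects with a `text` property.
function queryByText(nodes, text) {
  // queryBy*: returns the match or null, never throws
  return nodes.find((node) => node.text === text) ?? null;
}

function getByText(nodes, text) {
  // getBy*: throws when no element matches, so missing elements fail fast
  const match = queryByText(nodes, text);
  if (match === null) {
    throw new Error(`Unable to find an element with the text: ${text}`);
  }
  return match;
}

const nodes = [{ text: 'Click me' }, { text: 'Cancel' }];
console.log(getByText(nodes, 'Click me').text); // 'Click me'
console.log(queryByText(nodes, 'Missing'));     // null
```

This is why `queryBy*` is the right choice for asserting that an element is *absent*: `expect(screen.queryByText('Missing')).toBeNull()` passes, whereas `getByText` would abort the test with an error before the assertion runs.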
Reads: 0 · Feb 19, 19:53

How do you use streaming responses in Ollama to generate text in real time?

Ollama supports streaming responses, which matters for applications that need to display generated content in real time.

**1. Enabling streaming**

Set `"stream": true` in the API call:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Tell me a story about AI",
  "stream": true
}'
```

**2. Python streaming example**

```python
import ollama

# Stream generated text
for chunk in ollama.generate(model='llama3.1', prompt='Tell me a story', stream=True):
    print(chunk['response'], end='', flush=True)

# Stream a chat conversation
messages = [
    {'role': 'user', 'content': 'Explain quantum computing'}
]
for chunk in ollama.chat(model='llama3.1', messages=messages, stream=True):
    if 'message' in chunk:
        print(chunk['message']['content'], end='', flush=True)
```

**3. JavaScript streaming example**

```js
import ollama from 'ollama-js'

const client = new ollama.Ollama()

// Streaming generation
const stream = await client.generate({
  model: 'llama3.1',
  prompt: 'Tell me a story',
  stream: true
})
for await (const chunk of stream) {
  process.stdout.write(chunk.response)
}
```

**4. Handling a streaming response with the `requests` library**

```python
import requests
import json

response = requests.post(
    'http://localhost:11434/api/generate',
    json={
        'model': 'llama3.1',
        'prompt': 'Hello, how are you?',
        'stream': True
    },
    stream=True
)

for line in response.iter_lines():
    if line:
        data = json.loads(line)
        print(data.get('response', ''), end='', flush=True)
```

**5. Advantages of streaming**

- Better user experience: content appears as it is generated, reducing perceived waiting time
- Lower memory usage: no need to buffer the complete response
- Faster time to first token: output starts displaying immediately
- Better interactivity: users see partial results early

**6. Things to watch out for**

- Parse the response as JSON lines: each line is an independent JSON object
- Handle connection drops and implement reconnection logic
- Consider adding a timeout
- Implement cancellation so generation can be stopped

**7. Advanced streaming**

```python
import ollama
from queue import Queue
from threading import Thread

def stream_to_queue(queue, model, prompt):
    for chunk in ollama.generate(model=model, prompt=prompt, stream=True):
        queue.put(chunk['response'])
    queue.put(None)  # end-of-stream marker

# Consume the stream through a queue
queue = Queue()
thread = Thread(target=stream_to_queue, args=(queue, 'llama3.1', 'Tell me a story'))
thread.start()

while True:
    chunk = queue.get()
    if chunk is None:
        break
    print(chunk, end='', flush=True)

thread.join()
```
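Point 6 above says each streamed line is a standalone JSON object. The parsing step can be sketched in isolation (this assumes the NDJSON shape shown in the curl example, with `response` and `done` fields, and leaves out the network layer):

```javascript
// Accumulate text from an NDJSON (JSON-lines) stream buffer.
// Each line is an independent JSON object such as
// {"response": "Hel", "done": false} ... {"response": "", "done": true}
function collectStreamedText(ndjson) {
  let text = '';
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue; // skip blank lines
    const chunk = JSON.parse(line);
    text += chunk.response ?? '';
    if (chunk.done) break; // final chunk: generation finished
  }
  return text;
}

const sample = [
  '{"response": "Once", "done": false}',
  '{"response": " upon a time", "done": false}',
  '{"response": "", "done": true}',
].join('\n');
console.log(collectStreamedText(sample)); // "Once upon a time"
```

In a real client you would feed network chunks into a line buffer and call the parser as complete lines arrive, since a TCP read may end mid-line.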
Reads: 0 · Feb 19, 19:51

How do you run multiple models concurrently in Ollama and manage resources?

Ollama supports running multiple models concurrently, which is useful when serving several requests at once or when different models handle different tasks.

**1. List running models**

```shell
# Show currently loaded models
ollama ps
```

Example output:

```
NAME       ID          SIZE    PROCESSOR    UNTIL
llama3.1   1234567890  4.7GB   100% GPU     4 minutes from now
mistral    0987654321  4.2GB   100% GPU     2 minutes from now
```

**2. Concurrent request handling**

Ollama handles concurrent requests automatically; no extra configuration is required:

```python
import ollama
import concurrent.futures

def generate_response(prompt, model):
    response = ollama.generate(model=model, prompt=prompt)
    return response['response']

# Run several requests concurrently
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    futures = [
        executor.submit(generate_response, "Tell me a joke", "llama3.1"),
        executor.submit(generate_response, "Explain AI", "mistral"),
        executor.submit(generate_response, "Write code", "codellama")
    ]
    for future in concurrent.futures.as_completed(futures):
        print(future.result())
```

**3. Configuring concurrency**

Server-side concurrency is controlled through environment variables on the Ollama server:

```shell
# Number of requests each model can process in parallel
export OLLAMA_NUM_PARALLEL=4

# Number of models that can be loaded at the same time
export OLLAMA_MAX_LOADED_MODELS=3
```

**4. Routing tasks to different models**

```python
import ollama

# Route each task type to the model best suited for it
def process_request(task_type, input_text):
    if task_type == "chat":
        return ollama.generate(model="llama3.1", prompt=input_text)
    elif task_type == "code":
        return ollama.generate(model="codellama", prompt=input_text)
    elif task_type == "analysis":
        return ollama.generate(model="mistral", prompt=input_text)
```

**5. Switching and unloading models**

```shell
# Manually unload a model (frees memory)
ollama stop llama3.1

# Reload the model
ollama run llama3.1
```

**6. Resource management strategy**

Memory management:

- Monitor memory usage
- Adjust the concurrency level to match the available hardware
- Periodically unload rarely used models

GPU allocation (in a Modelfile):

```
# Offload 35 layers to the GPU
PARAMETER num_gpu 35

# Offload effectively all layers to the GPU
PARAMETER num_gpu 99
```

**7. Advanced concurrency pattern**

```python
import ollama
from queue import Queue
import threading

class ModelPool:
    def __init__(self, models):
        self.models = models
        self.queue = Queue()

    def worker(self):
        while True:
            task = self.queue.get()
            if task is None:
                break
            model, prompt = task
            response = ollama.generate(model=model, prompt=prompt)
            print(f"{model}: {response['response'][:50]}...")
            self.queue.task_done()

    def start_workers(self, num_workers=3):
        for _ in range(num_workers):
            threading.Thread(target=self.worker, daemon=True).start()

    def add_task(self, model, prompt):
        self.queue.put((model, prompt))

# Using the model pool
pool = ModelPool(["llama3.1", "mistral", "codellama"])
pool.start_workers(3)
pool.add_task("llama3.1", "Hello")
pool.add_task("mistral", "Hi")
pool.add_task("codellama", "Write code")
```
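The pool in section 7 pins each task to a caller-chosen model. When the models are interchangeable, the scheduling piece reduces to a round-robin assigner, which can be sketched on its own (illustrative only; the model names match the examples above, and the actual Ollama call is left out):

```javascript
// Round-robin scheduler: assign each incoming prompt to the next model
// in the list, wrapping around. This is purely the scheduling logic;
// feed the resulting (model, prompt) pairs to your Ollama client.
function makeRoundRobin(models) {
  let next = 0;
  return function assign(prompt) {
    const model = models[next];
    next = (next + 1) % models.length; // wrap around
    return { model, prompt };
  };
}

const assign = makeRoundRobin(['llama3.1', 'mistral', 'codellama']);
console.log(assign('Hello'));      // { model: 'llama3.1', prompt: 'Hello' }
console.log(assign('Hi'));         // { model: 'mistral', prompt: 'Hi' }
console.log(assign('Write code')); // { model: 'codellama', prompt: 'Write code' }
console.log(assign('Again'));      // wraps back to llama3.1
```

Keeping the scheduler separate from the I/O makes it trivial to unit-test and easy to swap for a least-loaded or task-type-aware policy later.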
Reads: 0 · Feb 19, 19:51

How do you integrate Ollama with programming languages such as Python and JavaScript?

Ollama integrates easily with a wide range of languages and frameworks.

**Python** — using the `ollama` Python library:

```python
import ollama

# Generate text
response = ollama.generate(model='llama3.1', prompt='Hello, how are you?')
print(response['response'])

# Chat
messages = [
    {'role': 'user', 'content': 'Hello!'},
    {'role': 'assistant', 'content': 'Hi there!'},
    {'role': 'user', 'content': 'How are you?'}
]
response = ollama.chat(model='llama3.1', messages=messages)
print(response['message']['content'])

# Streaming
for chunk in ollama.generate(model='llama3.1', prompt='Tell me a story', stream=True):
    print(chunk['response'], end='', flush=True)
```

**JavaScript/Node.js** — using the `ollama-js` library:

```js
import ollama from 'ollama-js'

const client = new ollama.Ollama()

// Generate text
const response = await client.generate({
  model: 'llama3.1',
  prompt: 'Hello, how are you?'
})
console.log(response.response)

// Chat
const chat = await client.chat({
  model: 'llama3.1',
  messages: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi there!' },
    { role: 'user', content: 'How are you?' }
  ]
})
console.log(chat.message.content)
```

**Go** — calling the REST API directly:

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

type GenerateRequest struct {
    Model  string `json:"model"`
    Prompt string `json:"prompt"`
}

type GenerateResponse struct {
    Response string `json:"response"`
}

func main() {
    req := GenerateRequest{
        Model:  "llama3.1",
        Prompt: "Hello, how are you?",
    }
    body, _ := json.Marshal(req)
    resp, _ := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewBuffer(body))
    var result GenerateResponse
    json.NewDecoder(resp.Body).Decode(&result)
    fmt.Println(result.Response)
}
```

**LangChain integration:**

```python
from langchain_community.llms import Ollama
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

llm = Ollama(model="llama3.1")

# Simple invocation
response = llm.invoke("Tell me a joke")
print(response)

# Chaining
prompt = ChatPromptTemplate.from_template("Tell me a {adjective} joke about {topic}")
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"adjective": "funny", "topic": "programming"}))
```

**REST API** — any language that speaks HTTP can call the REST API directly:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Hello, how are you?",
  "stream": false
}'
```
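Because the surface is plain JSON-over-HTTP, an integration in any language reduces to two steps: build the request body, and read the `response` field back out. A minimal sketch of those steps (the endpoint path matches the curl example above; error handling and streaming are deliberately left out):

```javascript
// Build the JSON body for POST /api/generate and extract the generated
// text from a non-streaming reply.
function buildGenerateBody(model, prompt) {
  return JSON.stringify({ model, prompt, stream: false });
}

function extractResponse(replyJson) {
  const reply = JSON.parse(replyJson);
  return reply.response ?? '';
}

const body = buildGenerateBody('llama3.1', 'Hello, how are you?');
console.log(body); // {"model":"llama3.1","prompt":"Hello, how are you?","stream":false}

// With fetch (Node 18+), the round trip would look roughly like:
//   const res = await fetch('http://localhost:11434/api/generate',
//                           { method: 'POST', body });
//   console.log(extractResponse(await res.text()));
console.log(extractResponse('{"response": "I am doing well!"}')); // "I am doing well!"
```

This is essentially what the Go example above does with `json.Marshal` and `json.NewDecoder`: the client libraries only add convenience around the same request/response shape.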
Reads: 0 · Feb 19, 19:50

What are the best practices for deploying Ollama in production?

Deploying Ollama in production involves the following key considerations:

**1. System requirements**

Hardware:

- CPU: a modern processor with AVX2 support
- Memory: at least 8 GB RAM, 16 GB+ recommended
- Storage: SSD, 4-20 GB per model
- GPU (optional): NVIDIA GPU (CUDA 11.0+) or Apple Silicon (M1/M2/M3)

Operating system:

- Linux (Ubuntu 20.04+ recommended)
- macOS 11+
- Windows 10/11

**2. Deployment architecture**

Single machine:

```shell
# Install and start the server
ollama serve
# Listens on 127.0.0.1:11434 by default; set OLLAMA_HOST=0.0.0.0 to expose it
```

Docker:

```dockerfile
FROM ollama/ollama

# Copy a custom model
COPY my-model.gguf /root/.ollama/models/

# Start the server
CMD ["ollama", "serve"]
```

```shell
# Run the container
docker run -d -v ollama:/root/.ollama -p 11434:11434 --gpus all ollama/ollama
```

**3. Load balancing**

Using Nginx as a reverse proxy:

```nginx
upstream ollama_backend {
    server 192.168.1.10:11434;
    server 192.168.1.11:11434;
    server 192.168.1.12:11434;
}

server {
    listen 80;
    server_name ollama.example.com;

    location / {
        proxy_pass http://ollama_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

**4. Monitoring and logging**

Health check:

```shell
curl http://localhost:11434/api/tags
```

Log management:

```shell
# Follow the server logs (Linux with systemd)
journalctl -u ollama -f

# Enable debug logging
export OLLAMA_DEBUG=1
```

**5. Security**

API authentication via the reverse proxy:

```nginx
location /api/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://localhost:11434/api/;
}
```

Firewall configuration:

```shell
# Allow access only from a specific subnet
ufw allow from 192.168.1.0/24 to any port 11434
```

**6. Performance optimization**

Model preloading:

```shell
# Preload a model at startup
ollama run llama3.1 &
```

Concurrency:

```shell
# Requests processed in parallel per model
export OLLAMA_NUM_PARALLEL=4
```

**7. Backup and restore**

```shell
# Back up models
tar -czf ollama-backup.tar.gz ~/.ollama/

# Restore models
tar -xzf ollama-backup.tar.gz -C ~/
```
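For the monitoring step, a liveness probe usually wraps the `/api/tags` check in retries with exponential backoff so a slow model load is not misreported as an outage. The retry schedule is pure logic and can be sketched separately from the HTTP call (the base delay and cap below are illustrative choices, not Ollama settings):

```javascript
// Exponential backoff schedule for a health-check probe:
// delays double from `baseMs` up to a ceiling of `capMs`.
function backoffSchedule(attempts, baseMs = 500, capMs = 8000) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, capMs));
  }
  return delays;
}

console.log(backoffSchedule(6)); // [ 500, 1000, 2000, 4000, 8000, 8000 ]

// A probe loop would then look roughly like:
//   for (const delay of backoffSchedule(6)) {
//     if (await isHealthy('http://localhost:11434/api/tags')) return true;
//     await new Promise((r) => setTimeout(r, delay));
//   }
//   return false; // alert after all attempts fail
```

Capping the delay keeps the probe responsive once the server recovers, while the doubling prevents a down server from being hammered.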
Reads: 0 · Feb 19, 19:50

How does Puppeteer handle dynamic pages and single-page applications (SPAs)? What techniques exist for dealing with asynchronous loading and route changes?

Puppeteer is well suited to dynamic pages and single-page applications (SPAs): it can execute JavaScript, wait for asynchronous loading, and react to route changes.

**1. Handling dynamically loaded content**

Wait for an element to appear:

```js
const puppeteer = require('puppeteer');

async function scrapeDynamicContent() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Wait for the dynamically loaded element
  await page.waitForSelector('.dynamic-content', { visible: true });

  const content = await page.$eval('.dynamic-content', el => el.textContent);
  console.log(content);

  await browser.close();
}

scrapeDynamicContent();
```

Wait for a specific condition:

```js
await page.waitForFunction(() => {
  return document.querySelectorAll('.item').length > 0;
});
```

Wait for network requests to settle:

```js
await page.goto('https://example.com', {
  waitUntil: 'networkidle2'
});
```

**2. Handling infinite scroll**

Basic infinite scroll:

```js
async function scrapeInfiniteScroll() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/infinite-scroll');

  const items = [];
  let previousHeight = 0;

  while (true) {
    // Scroll down one viewport
    await page.evaluate(() => {
      window.scrollBy(0, window.innerHeight);
    });

    // Wait for new content to load
    await page.waitForTimeout(1000);

    // Stop when the page height no longer grows
    const currentHeight = await page.evaluate(() => document.body.scrollHeight);
    if (currentHeight === previousHeight) {
      break; // no new content
    }
    previousHeight = currentHeight;

    // Collect data
    const newItems = await page.$$eval('.item', elements => {
      return elements.map(el => el.textContent);
    });
    items.push(...newItems);
  }

  await browser.close();
  return items;
}
```

Optimized infinite scroll:

```js
async function scrapeInfiniteScrollOptimized() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/infinite-scroll');

  const items = [];
  let noNewItemsCount = 0;

  while (noNewItemsCount < 3) { // stop after 3 consecutive rounds with no new content
    const itemCountBefore = items.length;

    // Scroll to the bottom
    await page.evaluate(() => {
      window.scrollTo(0, document.body.scrollHeight);
    });

    // Wait for the loading indicator to disappear
    try {
      await page.waitForSelector('.loading', { hidden: true, timeout: 3000 });
    } catch (error) {
      // The loading indicator may not exist
    }

    // Collect new data
    const newItems = await page.$$eval('.item', elements => {
      return elements.map(el => el.textContent);
    });

    if (newItems.length === itemCountBefore) {
      noNewItemsCount++;
    } else {
      noNewItemsCount = 0;
      items.push(...newItems);
    }
  }

  await browser.close();
  return items;
}
```

**3. Handling SPA routes**

Listen for route changes:

```js
async function handleSPARoutes() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Listen for navigations
  page.on('framenavigated', async (frame) => {
    console.log('Navigated to:', frame.url());

    // Wait for the page content to load
    await frame.waitForSelector('.content');
    const title = await frame.$eval('.content', el => el.textContent);
    console.log('Page title:', title);
  });

  // Click navigation links
  await page.click('#about-link');
  await page.waitForTimeout(1000);

  await page.click('#contact-link');
  await page.waitForTimeout(1000);

  await browser.close();
}
```

Wait for a specific route:

```js
async function waitForRoute(page, path) {
  return new Promise((resolve) => {
    const checkRoute = async () => {
      const currentPath = await page.evaluate(() => window.location.pathname);
      if (currentPath === path) {
        resolve();
      } else {
        setTimeout(checkRoute, 100);
      }
    };
    checkRoute();
  });
}

// Usage
await page.click('#about-link');
await waitForRoute(page, '/about');
```

**4. Handling AJAX requests**

Wait for a specific API response:

```js
async function waitForAPIResponse(page, urlPattern) {
  return new Promise((resolve) => {
    page.on('response', (response) => {
      if (response.url().includes(urlPattern)) {
        resolve(response);
      }
    });
  });
}

// Usage: trigger the click and await the matching response in parallel
const [apiResponse] = await Promise.all([
  waitForAPIResponse(page, '/api/data'),
  page.click('#load-data-button')
]);
const data = await apiResponse.json();
console.log(data);
```

Intercept and modify API requests:

```js
await page.setRequestInterception(true);

page.on('request', (request) => {
  if (request.url().includes('/api/data')) {
    // Modify the request
    request.continue({
      headers: {
        ...request.headers(),
        'Authorization': 'Bearer token'
      }
    });
  } else {
    request.continue();
  }
});
```

**5. Handling WebSockets**

Listen for WebSocket frames via the Chrome DevTools Protocol:

```js
const client = await page.target().createCDPSession();
await client.send('Network.enable');

client.on('Network.webSocketFrameReceived', (params) => {
  console.log('WebSocket message:', params.response.payloadData);
});

client.on('Network.webSocketFrameSent', (params) => {
  console.log('WebSocket sent:', params.response.payloadData);
});
```

**6. Handling client-side rendering**

Wait for client-side rendering to finish:

```js
async function waitForClientRendering(page) {
  // Option 1: wait for a specific element
  await page.waitForSelector('.rendered-content');

  // Option 2: wait for an app-defined rendering flag
  await page.waitForFunction(() => {
    return window.__RENDER_COMPLETE__ === true;
  });

  // Option 3: wait until resource timing entries appear
  await page.waitForFunction(() => {
    return performance.getEntriesByType('resource').length > 0;
  });
}
```

Working with React/Vue apps:

```js
async function scrapeReactApp() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/react-app');

  // Wait for the React app to mount
  await page.waitForSelector('#root');

  // Wait for the data to finish loading
  await page.waitForFunction(() => {
    return window.__INITIAL_STATE__?.loaded === true;
  });

  // Interact with the React app
  await page.click('#load-more-button');
  await page.waitForSelector('.new-items');

  const items = await page.$$eval('.item', elements => {
    return elements.map(el => el.textContent);
  });

  await browser.close();
  return items;
}
```

**7. Real-world scenarios**

Scenario 1: scraping dynamic social-media feeds

```js
async function scrapeSocialMediaPosts(username) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(`https://social-media.com/${username}`);

  const posts = [];

  // Scroll to load more posts
  while (posts.length < 50) {
    // Scroll down one viewport
    await page.evaluate(() => {
      window.scrollBy(0, window.innerHeight);
    });

    // Wait for new posts to load
    await page.waitForTimeout(2000);

    // Collect post data
    const newPosts = await page.$$eval('.post', elements => {
      return elements.map(post => ({
        id: post.dataset.id,
        content: post.querySelector('.content')?.textContent,
        likes: post.querySelector('.likes')?.textContent,
        timestamp: post.querySelector('.timestamp')?.textContent
      }));
    });

    // Add only posts we have not seen yet
    const seenIds = new Set(posts.map(p => p.id));
    const uniqueNewPosts = newPosts.filter(p => !seenIds.has(p.id));
    posts.push(...uniqueNewPosts);
  }

  await browser.close();
  return posts;
}
```

Scenario 2: scraping an e-commerce product list

```js
async function scrapeEcommerceProducts(categoryUrl) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(categoryUrl);

  const products = [];

  while (true) {
    // Wait for products to load
    await page.waitForSelector('.product-card');

    // Collect products on the current page
    const pageProducts = await page.$$eval('.product-card', cards => {
      return cards.map(card => ({
        id: card.dataset.id,
        title: card.querySelector('.title')?.textContent,
        price: card.querySelector('.price')?.textContent,
        rating: card.querySelector('.rating')?.textContent
      }));
    });
    products.push(...pageProducts);

    // Check whether there is a next page
    const nextButton = await page.$('.next-page:not(.disabled)');
    if (!nextButton) {
      break;
    }

    // Go to the next page
    await nextButton.click();
    await page.waitForTimeout(1000);
  }

  await browser.close();
  return products;
}
```

Scenario 3: capturing real-time DOM updates

```js
async function scrapeRealTimeData(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Observe DOM mutations inside the page
  await page.evaluate(() => {
    const observer = new MutationObserver((mutations) => {
      mutations.forEach((mutation) => {
        if (mutation.type === 'childList') {
          window.__DATA_UPDATES__ = window.__DATA_UPDATES__ || [];
          window.__DATA_UPDATES__.push({
            timestamp: Date.now(),
            addedNodes: mutation.addedNodes.length
          });
        }
      });
    });
    observer.observe(document.body, { childList: true, subtree: true });
  });

  // Collect updates for a while
  await page.waitForTimeout(30000);

  // Retrieve the collected data
  const updates = await page.evaluate(() => {
    return window.__DATA_UPDATES__ || [];
  });

  await browser.close();
  return updates;
}
```

**8. Best practices**

1. Use the appropriate waiting strategy:

```js
// Prefer waitForSelector
await page.waitForSelector('.element');

// Use waitForFunction for complex conditions
await page.waitForFunction(() => {
  return document.querySelectorAll('.item').length > 10;
});

// Use waitForResponse for network requests
await page.waitForResponse(response => response.url().includes('/api/data'));
```

2. Avoid hard-coded waits:

```js
// Bad
await page.waitForTimeout(5000);

// Good
await page.waitForSelector('.loaded-content');
```

3. Handle load failures:

```js
try {
  await page.waitForSelector('.content', { timeout: 10000 });
} catch (error) {
  console.log('Content failed to load, using fallback');
  // Fall back to an alternative strategy
}
```

4. Optimize performance:

```js
// Block unneeded resources
await page.setRequestInterception(true);
page.on('request', (request) => {
  if (['image', 'font', 'media'].includes(request.resourceType())) {
    request.abort();
  } else {
    request.continue();
  }
});
```

5. Deal with anti-bot measures:

```js
// Set a realistic user agent
await page.setUserAgent('Mozilla/5.0 ...');

// Add random delays
const randomDelay = () => Math.random() * 2000 + 1000;
await page.waitForTimeout(randomDelay());

// Simulate human behavior
await page.evaluate(() => {
  window.scrollBy(0, Math.random() * 500);
});
```
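The stop condition used by both infinite-scroll functions above (quit once the page height stops growing for a few rounds) is easy to isolate and unit-test apart from the browser. A sketch of that logic (the patience value of 3 mirrors the optimized example; pass successive `document.body.scrollHeight` readings in):

```javascript
// Detect when infinite scrolling has exhausted the page: feed in the
// page height after each scroll; returns true once the height has been
// unchanged for `patience` consecutive readings.
function makeScrollStopDetector(patience = 3) {
  let lastHeight = -1;
  let unchanged = 0;
  return function shouldStop(currentHeight) {
    if (currentHeight === lastHeight) {
      unchanged++;
    } else {
      unchanged = 0;
      lastHeight = currentHeight;
    }
    return unchanged >= patience;
  };
}

const shouldStop = makeScrollStopDetector(3);
console.log(shouldStop(1000)); // false (first reading)
console.log(shouldStop(2000)); // false (still growing)
console.log(shouldStop(2000)); // false (unchanged once)
console.log(shouldStop(2000)); // false (unchanged twice)
console.log(shouldStop(2000)); // true  (unchanged three times)
```

Separating the detector from the scrolling loop means the flakiest part of an infinite-scroll scraper can be verified with plain unit tests, while the Puppeteer loop only supplies height readings.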
Reads: 0 · Feb 19, 19:49